00:00:00.000 Started by upstream project "autotest-per-patch" build number 126160 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 23864 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.069 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.094 Fetching changes from the remote Git repository 00:00:00.098 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.152 Using shallow fetch with depth 1 00:00:00.152 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.152 > git --version # timeout=10 00:00:00.204 > git --version # 'git version 2.39.2' 00:00:00.204 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/40/22240/21 # timeout=5 00:00:04.411 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.423 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.433 Checking out Revision b0ebb039b16703d64cc7534b6e0fa0780ed1e683 (FETCH_HEAD) 00:00:04.433 > git config core.sparsecheckout # timeout=10 00:00:04.444 > git read-tree -mu HEAD # timeout=10 00:00:04.460 > git checkout -f b0ebb039b16703d64cc7534b6e0fa0780ed1e683 # timeout=5 00:00:04.482 Commit message: "jenkins/jjb-config: Add support for native DPDK build into docker-autoruner" 00:00:04.482 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.604 [Pipeline] Start of Pipeline 00:00:04.618 [Pipeline] library 00:00:04.619 Loading library shm_lib@master 00:00:04.619 Library shm_lib@master is cached. Copying from home. 00:00:04.637 [Pipeline] node 00:00:04.651 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest 00:00:04.652 [Pipeline] { 00:00:04.663 [Pipeline] catchError 00:00:04.665 [Pipeline] { 00:00:04.678 [Pipeline] wrap 00:00:04.687 [Pipeline] { 00:00:04.695 [Pipeline] stage 00:00:04.697 [Pipeline] { (Prologue) 00:00:04.864 [Pipeline] sh 00:00:05.144 + logger -p user.info -t JENKINS-CI 00:00:05.159 [Pipeline] echo 00:00:05.160 Node: WFP19 00:00:05.166 [Pipeline] sh 00:00:05.454 [Pipeline] setCustomBuildProperty 00:00:05.466 [Pipeline] echo 00:00:05.467 Cleanup processes 00:00:05.471 [Pipeline] sh 00:00:05.749 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:05.749 1566568 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:05.761 [Pipeline] sh 00:00:06.039 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:06.039 ++ grep -v 'sudo pgrep' 00:00:06.039 ++ awk '{print $1}' 00:00:06.039 + sudo kill -9 00:00:06.039 + true 00:00:06.050 [Pipeline] cleanWs 00:00:06.058 [WS-CLEANUP] Deleting project workspace... 00:00:06.058 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.064 [WS-CLEANUP] done 00:00:06.068 [Pipeline] setCustomBuildProperty 00:00:06.081 [Pipeline] sh 00:00:06.360 + sudo git config --global --replace-all safe.directory '*' 00:00:06.444 [Pipeline] httpRequest 00:00:06.474 [Pipeline] echo 00:00:06.475 Sorcerer 10.211.164.101 is alive 00:00:06.480 [Pipeline] httpRequest 00:00:06.485 HttpMethod: GET 00:00:06.485 URL: http://10.211.164.101/packages/jbp_b0ebb039b16703d64cc7534b6e0fa0780ed1e683.tar.gz 00:00:06.486 Sending request to url: http://10.211.164.101/packages/jbp_b0ebb039b16703d64cc7534b6e0fa0780ed1e683.tar.gz 00:00:06.487 Response Code: HTTP/1.1 200 OK 00:00:06.488 Success: Status code 200 is in the accepted range: 200,404 00:00:06.488 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_b0ebb039b16703d64cc7534b6e0fa0780ed1e683.tar.gz 00:00:07.550 [Pipeline] sh 00:00:07.831 + tar --no-same-owner -xf jbp_b0ebb039b16703d64cc7534b6e0fa0780ed1e683.tar.gz 00:00:07.846 [Pipeline] httpRequest 00:00:07.870 [Pipeline] echo 00:00:07.871 Sorcerer 10.211.164.101 is alive 00:00:07.879 [Pipeline] httpRequest 00:00:07.883 HttpMethod: GET 00:00:07.883 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.884 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:07.885 Response Code: HTTP/1.1 200 OK 00:00:07.886 Success: Status code 200 is in the accepted range: 200,404 00:00:07.886 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:31.245 [Pipeline] sh 00:00:31.526 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:34.077 [Pipeline] sh 00:00:34.361 + git -C spdk log --oneline -n5 00:00:34.361 719d03c6a sock/uring: only register net impl if supported 00:00:34.361 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:34.361 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:34.361 6c7c1f57e accel: add sequence outstanding stat 00:00:34.361 3bc8e6a26 accel: add utility to put task 00:00:34.380 [Pipeline] } 00:00:34.398 [Pipeline] // stage 00:00:34.407 [Pipeline] stage 00:00:34.409 [Pipeline] { (Prepare) 00:00:34.428 [Pipeline] writeFile 00:00:34.444 [Pipeline] sh 00:00:34.726 + logger -p user.info -t JENKINS-CI 00:00:34.742 [Pipeline] sh 00:00:35.027 + logger -p user.info -t JENKINS-CI 00:00:35.039 [Pipeline] sh 00:00:35.343 + cat autorun-spdk.conf 00:00:35.343 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.343 SPDK_TEST_BLOCKDEV=1 00:00:35.343 SPDK_TEST_ISAL=1 00:00:35.343 SPDK_TEST_CRYPTO=1 00:00:35.343 SPDK_TEST_REDUCE=1 00:00:35.343 SPDK_TEST_VBDEV_COMPRESS=1 00:00:35.343 SPDK_RUN_UBSAN=1 00:00:35.350 RUN_NIGHTLY=0 00:00:35.355 [Pipeline] readFile 00:00:35.382 [Pipeline] withEnv 00:00:35.384 [Pipeline] { 00:00:35.399 [Pipeline] sh 00:00:35.683 + set -ex 00:00:35.683 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]] 00:00:35.683 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:35.683 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.683 ++ SPDK_TEST_BLOCKDEV=1 00:00:35.683 ++ SPDK_TEST_ISAL=1 00:00:35.683 ++ SPDK_TEST_CRYPTO=1 00:00:35.683 ++ SPDK_TEST_REDUCE=1 00:00:35.683 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:35.683 ++ SPDK_RUN_UBSAN=1 00:00:35.683 ++ RUN_NIGHTLY=0 00:00:35.683 + case $SPDK_TEST_NVMF_NICS in 00:00:35.683 + DRIVERS= 00:00:35.683 + [[ -n '' ]] 00:00:35.683 + exit 0 00:00:35.692 [Pipeline] } 00:00:35.708 [Pipeline] // withEnv 00:00:35.715 
[Pipeline] } 00:00:35.731 [Pipeline] // stage 00:00:35.743 [Pipeline] catchError 00:00:35.745 [Pipeline] { 00:00:35.761 [Pipeline] timeout 00:00:35.762 Timeout set to expire in 40 min 00:00:35.763 [Pipeline] { 00:00:35.781 [Pipeline] stage 00:00:35.783 [Pipeline] { (Tests) 00:00:35.803 [Pipeline] sh 00:00:36.091 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest 00:00:36.091 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest 00:00:36.091 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest 00:00:36.091 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]] 00:00:36.091 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:36.091 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output 00:00:36.091 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]] 00:00:36.091 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:00:36.091 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output 00:00:36.091 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:00:36.091 + [[ crypto-phy-autotest == pkgdep-* ]] 00:00:36.091 + cd /var/jenkins/workspace/crypto-phy-autotest 00:00:36.091 + source /etc/os-release 00:00:36.091 ++ NAME='Fedora Linux' 00:00:36.091 ++ VERSION='38 (Cloud Edition)' 00:00:36.091 ++ ID=fedora 00:00:36.091 ++ VERSION_ID=38 00:00:36.091 ++ VERSION_CODENAME= 00:00:36.091 ++ PLATFORM_ID=platform:f38 00:00:36.091 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:36.091 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:36.091 ++ LOGO=fedora-logo-icon 00:00:36.091 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:36.091 ++ HOME_URL=https://fedoraproject.org/ 00:00:36.091 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:36.091 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:36.091 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:36.091 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:36.091 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:36.091 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:36.091 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:36.091 ++ SUPPORT_END=2024-05-14 00:00:36.091 ++ VARIANT='Cloud Edition' 00:00:36.091 ++ VARIANT_ID=cloud 00:00:36.091 + uname -a 00:00:36.091 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:36.091 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:00:39.384 Hugepages 00:00:39.384 node hugesize free / total 00:00:39.384 node0 1048576kB 0 / 0 00:00:39.384 node0 2048kB 0 / 0 00:00:39.384 node1 1048576kB 0 / 0 00:00:39.384 node1 2048kB 0 / 0 00:00:39.384 00:00:39.384 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:39.384 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:39.384 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:39.384 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:39.384 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:39.384 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:39.384 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:39.384 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:39.384 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:39.384 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:39.384 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:39.384 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:39.384 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:39.384 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:39.384 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:39.385 I/OAT 0000:80:04.6 8086 2021 
1 ioatdma - - 00:00:39.385 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:39.644 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:39.644 + rm -f /tmp/spdk-ld-path 00:00:39.644 + source autorun-spdk.conf 00:00:39.644 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.644 ++ SPDK_TEST_BLOCKDEV=1 00:00:39.644 ++ SPDK_TEST_ISAL=1 00:00:39.644 ++ SPDK_TEST_CRYPTO=1 00:00:39.644 ++ SPDK_TEST_REDUCE=1 00:00:39.644 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:00:39.644 ++ SPDK_RUN_UBSAN=1 00:00:39.644 ++ RUN_NIGHTLY=0 00:00:39.644 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:39.644 + [[ -n '' ]] 00:00:39.644 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:39.644 + for M in /var/spdk/build-*-manifest.txt 00:00:39.644 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:39.644 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:00:39.644 + for M in /var/spdk/build-*-manifest.txt 00:00:39.644 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:39.644 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:00:39.644 ++ uname 00:00:39.644 + [[ Linux == \L\i\n\u\x ]] 00:00:39.644 + sudo dmesg -T 00:00:39.644 + sudo dmesg --clear 00:00:39.644 + dmesg_pid=1567629 00:00:39.644 + [[ Fedora Linux == FreeBSD ]] 00:00:39.644 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:39.644 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:39.644 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:39.644 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:39.644 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:39.644 + [[ -x /usr/src/fio-static/fio ]] 00:00:39.644 + sudo dmesg -Tw 00:00:39.644 + export FIO_BIN=/usr/src/fio-static/fio 00:00:39.644 + FIO_BIN=/usr/src/fio-static/fio 00:00:39.644 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:39.644 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:39.644 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:39.644 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:39.644 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:39.644 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:39.644 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:39.644 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:39.644 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:00:39.644 Test configuration: 00:00:39.644 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.644 SPDK_TEST_BLOCKDEV=1 00:00:39.644 SPDK_TEST_ISAL=1 00:00:39.644 SPDK_TEST_CRYPTO=1 00:00:39.644 SPDK_TEST_REDUCE=1 00:00:39.644 SPDK_TEST_VBDEV_COMPRESS=1 00:00:39.644 SPDK_RUN_UBSAN=1 00:00:39.903 RUN_NIGHTLY=0 10:08:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:00:39.903 10:08:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:39.903 10:08:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:39.903 10:08:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:39.904 10:08:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.904 10:08:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.904 10:08:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.904 10:08:04 -- paths/export.sh@5 -- $ export PATH 00:00:39.904 10:08:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.904 10:08:04 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:00:39.904 10:08:04 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:39.904 10:08:04 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721030884.XXXXXX 00:00:39.904 10:08:04 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721030884.iOhmQV 00:00:39.904 10:08:04 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:39.904 10:08:04 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 
00:00:39.904 10:08:04 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:00:39.904 10:08:04 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:39.904 10:08:04 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:39.904 10:08:04 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:39.904 10:08:04 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:39.904 10:08:04 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.904 10:08:04 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:00:39.904 10:08:04 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:39.904 10:08:04 -- pm/common@17 -- $ local monitor 00:00:39.904 10:08:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.904 10:08:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.904 10:08:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.904 10:08:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.904 10:08:04 -- pm/common@21 -- $ date +%s 00:00:39.904 10:08:04 -- pm/common@25 -- $ sleep 1 00:00:39.904 10:08:04 -- pm/common@21 -- $ date +%s 00:00:39.904 10:08:04 -- pm/common@21 -- $ date +%s 00:00:39.904 10:08:04 -- pm/common@21 -- $ date +%s 00:00:39.904 10:08:04 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721030884 00:00:39.904 10:08:04 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721030884 00:00:39.904 10:08:04 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721030884 00:00:39.904 10:08:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721030884 00:00:39.904 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721030884_collect-vmstat.pm.log 00:00:39.904 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721030884_collect-cpu-load.pm.log 00:00:39.904 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721030884_collect-cpu-temp.pm.log 00:00:39.904 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721030884_collect-bmc-pm.bmc.pm.log 00:00:40.843 10:08:05 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:40.843 10:08:05 -- spdk/autobuild.sh@11 -- $ 
SPDK_TEST_AUTOBUILD=
00:00:40.843 10:08:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:40.843 10:08:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:40.843 10:08:05 -- spdk/autobuild.sh@16 -- $ date -u
00:00:40.843 Mon Jul 15 08:08:05 AM UTC 2024
00:00:40.843 10:08:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:40.843 v24.09-pre-202-g719d03c6a
00:00:40.843 10:08:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:40.843 10:08:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:40.843 10:08:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:40.843 10:08:05 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:00:40.843 10:08:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:40.843 10:08:05 -- common/autotest_common.sh@10 -- $ set +x
00:00:40.843 ************************************
00:00:40.843 START TEST ubsan
00:00:40.843 ************************************
00:00:40.843 10:08:05 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:40.843 using ubsan
00:00:40.843
00:00:40.843 real 0m0.000s
00:00:40.843 user 0m0.000s
00:00:40.843 sys 0m0.000s
00:00:40.843 10:08:05 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:00:40.843 10:08:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:40.843 ************************************
00:00:40.843 END TEST ubsan
00:00:40.843 ************************************
00:00:41.102 10:08:05 -- common/autotest_common.sh@1142 -- $ return 0
00:00:41.103 10:08:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:41.103 10:08:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:41.103 10:08:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:41.103 10:08:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:41.103 10:08:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:41.103 10:08:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:41.103 10:08:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:41.103 10:08:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:41.103 10:08:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:41.103 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:00:41.103 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:00:41.362 Using 'verbs' RDMA provider
00:00:54.954 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:09.845 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:09.845 Creating mk/config.mk...done.
00:01:09.845 Creating mk/cc.flags.mk...done.
00:01:09.845 Type 'make' to build.
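For reference, the configure step above can be reproduced outside of Jenkins with a short shell sketch along the following lines. The configure flags are taken verbatim from the log; the repository URL, checkout location, fio source path, and parallel job count are assumptions that must be adapted to the local machine, and additional build prerequisites (for example via scripts/pkgdep.sh) may be needed for the crypto/compress options.

  # Sketch only: mirror this run's SPDK configuration locally (paths and job count are placeholders).
  git clone https://github.com/spdk/spdk.git && cd spdk
  git submodule update --init            # pulls in dpdk, isa-l, intel-ipsec-mb used by the flags below
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --with-vbdev-compress --with-dpdk-compressdev --with-crypto \
      --enable-ubsan --enable-coverage --with-ublk --with-shared
  make -j"$(nproc)"                      # this CI run builds with make -j112 below
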
00:01:09.845 10:08:33 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:09.845 10:08:33 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:09.845 10:08:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:09.845 10:08:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.845 ************************************ 00:01:09.845 START TEST make 00:01:09.845 ************************************ 00:01:09.845 10:08:33 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:09.845 make[1]: Nothing to be done for 'all'. 00:01:36.392 The Meson build system 00:01:36.392 Version: 1.3.1 00:01:36.392 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk 00:01:36.392 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp 00:01:36.392 Build type: native build 00:01:36.392 Program cat found: YES (/usr/bin/cat) 00:01:36.392 Project name: DPDK 00:01:36.392 Project version: 24.03.0 00:01:36.392 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:36.392 C linker for the host machine: cc ld.bfd 2.39-16 00:01:36.392 Host machine cpu family: x86_64 00:01:36.392 Host machine cpu: x86_64 00:01:36.392 Message: ## Building in Developer Mode ## 00:01:36.392 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:36.392 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:36.392 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:36.392 Program python3 found: YES (/usr/bin/python3) 00:01:36.392 Program cat found: YES (/usr/bin/cat) 00:01:36.392 Compiler for C supports arguments -march=native: YES 00:01:36.392 Checking for size of "void *" : 8 00:01:36.392 Checking for size of "void *" : 8 (cached) 00:01:36.392 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:36.392 Library m found: YES 00:01:36.392 Library numa found: YES 00:01:36.392 Has header "numaif.h" : YES 00:01:36.392 Library fdt found: NO 00:01:36.392 Library execinfo found: NO 00:01:36.392 Has header "execinfo.h" : YES 00:01:36.392 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:36.392 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:36.392 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:36.392 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:36.392 Run-time dependency openssl found: YES 3.0.9 00:01:36.392 Run-time dependency libpcap found: YES 1.10.4 00:01:36.392 Has header "pcap.h" with dependency libpcap: YES 00:01:36.392 Compiler for C supports arguments -Wcast-qual: YES 00:01:36.392 Compiler for C supports arguments -Wdeprecated: YES 00:01:36.392 Compiler for C supports arguments -Wformat: YES 00:01:36.392 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:36.392 Compiler for C supports arguments -Wformat-security: NO 00:01:36.392 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.392 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:36.392 Compiler for C supports arguments -Wnested-externs: YES 00:01:36.392 Compiler for C supports arguments -Wold-style-definition: YES 00:01:36.392 Compiler for C supports arguments -Wpointer-arith: YES 00:01:36.392 Compiler for C supports arguments -Wsign-compare: YES 00:01:36.392 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:36.392 Compiler for C supports arguments -Wundef: YES 00:01:36.392 Compiler for C 
supports arguments -Wwrite-strings: YES 00:01:36.392 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:36.392 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:36.392 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.392 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:36.392 Program objdump found: YES (/usr/bin/objdump) 00:01:36.392 Compiler for C supports arguments -mavx512f: YES 00:01:36.392 Checking if "AVX512 checking" compiles: YES 00:01:36.392 Fetching value of define "__SSE4_2__" : 1 00:01:36.392 Fetching value of define "__AES__" : 1 00:01:36.392 Fetching value of define "__AVX__" : 1 00:01:36.392 Fetching value of define "__AVX2__" : 1 00:01:36.392 Fetching value of define "__AVX512BW__" : 1 00:01:36.392 Fetching value of define "__AVX512CD__" : 1 00:01:36.392 Fetching value of define "__AVX512DQ__" : 1 00:01:36.392 Fetching value of define "__AVX512F__" : 1 00:01:36.392 Fetching value of define "__AVX512VL__" : 1 00:01:36.392 Fetching value of define "__PCLMUL__" : 1 00:01:36.392 Fetching value of define "__RDRND__" : 1 00:01:36.392 Fetching value of define "__RDSEED__" : 1 00:01:36.392 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:36.392 Fetching value of define "__znver1__" : (undefined) 00:01:36.392 Fetching value of define "__znver2__" : (undefined) 00:01:36.392 Fetching value of define "__znver3__" : (undefined) 00:01:36.392 Fetching value of define "__znver4__" : (undefined) 00:01:36.392 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.392 Message: lib/log: Defining dependency "log" 00:01:36.392 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.392 Message: lib/telemetry: Defining dependency "telemetry" 00:01:36.392 Checking for function "getentropy" : NO 00:01:36.392 Message: lib/eal: Defining dependency "eal" 00:01:36.392 Message: lib/ring: Defining dependency "ring" 00:01:36.392 Message: lib/rcu: Defining dependency "rcu" 00:01:36.392 Message: lib/mempool: Defining dependency "mempool" 00:01:36.392 Message: lib/mbuf: Defining dependency "mbuf" 00:01:36.392 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.392 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:36.392 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:36.392 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:36.392 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:36.392 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:36.392 Compiler for C supports arguments -mpclmul: YES 00:01:36.392 Compiler for C supports arguments -maes: YES 00:01:36.392 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.392 Compiler for C supports arguments -mavx512bw: YES 00:01:36.392 Compiler for C supports arguments -mavx512dq: YES 00:01:36.392 Compiler for C supports arguments -mavx512vl: YES 00:01:36.392 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:36.392 Compiler for C supports arguments -mavx2: YES 00:01:36.392 Compiler for C supports arguments -mavx: YES 00:01:36.392 Message: lib/net: Defining dependency "net" 00:01:36.392 Message: lib/meter: Defining dependency "meter" 00:01:36.392 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.392 Message: lib/pci: Defining dependency "pci" 00:01:36.392 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.392 Message: lib/hash: Defining dependency "hash" 00:01:36.392 Message: lib/timer: Defining dependency "timer" 00:01:36.392 Message: 
lib/compressdev: Defining dependency "compressdev" 00:01:36.392 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.392 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.392 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.392 Message: lib/power: Defining dependency "power" 00:01:36.392 Message: lib/reorder: Defining dependency "reorder" 00:01:36.392 Message: lib/security: Defining dependency "security" 00:01:36.392 Has header "linux/userfaultfd.h" : YES 00:01:36.392 Has header "linux/vduse.h" : YES 00:01:36.392 Message: lib/vhost: Defining dependency "vhost" 00:01:36.392 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.392 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:01:36.392 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.392 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.392 Compiler for C supports arguments -std=c11: YES 00:01:36.392 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:01:36.392 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:01:36.392 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:01:36.392 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:01:36.392 Run-time dependency libmlx5 found: YES 1.24.44.0 00:01:36.392 Run-time dependency libibverbs found: YES 1.14.44.0 00:01:36.392 Library mtcr_ul found: NO 00:01:36.392 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:01:36.392 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies 
libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:01:38.987 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header 
"infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:38.987 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:38.987 Configuring mlx5_autoconf.h using configuration 00:01:38.987 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:38.987 Run-time dependency libcrypto found: YES 3.0.9 00:01:38.987 Library IPSec_MB found: YES 00:01:38.987 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:38.987 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:38.987 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:38.987 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:38.987 Library IPSec_MB found: YES 00:01:38.987 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:38.987 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:38.987 Compiler for C supports arguments -std=c11: YES (cached) 00:01:38.987 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:38.987 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:38.987 
Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:38.987 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:38.987 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:38.987 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:38.987 Library libisal found: NO 00:01:38.987 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:38.987 Compiler for C supports arguments -std=c11: YES (cached) 00:01:38.987 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:38.987 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:38.987 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:38.987 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:38.987 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:38.987 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:38.987 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:38.987 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:38.987 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:38.987 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:38.987 Program doxygen found: YES (/usr/bin/doxygen) 00:01:38.987 Configuring doxy-api-html.conf using configuration 00:01:38.987 Configuring doxy-api-man.conf using configuration 00:01:38.987 Program mandb found: YES (/usr/bin/mandb) 00:01:38.987 Program sphinx-build found: NO 00:01:38.987 Configuring rte_build_config.h using configuration 00:01:38.987 Message: 00:01:38.987 ================= 00:01:38.987 Applications Enabled 00:01:38.987 ================= 00:01:38.987 00:01:38.987 apps: 00:01:38.987 00:01:38.987 00:01:38.987 Message: 00:01:38.987 ================= 00:01:38.987 Libraries Enabled 00:01:38.987 ================= 00:01:38.987 00:01:38.987 libs: 00:01:38.987 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:38.987 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:38.987 cryptodev, dmadev, power, reorder, security, vhost, 00:01:38.987 00:01:38.987 Message: 00:01:38.987 =============== 00:01:38.987 Drivers Enabled 00:01:38.987 =============== 00:01:38.987 00:01:38.987 common: 00:01:38.987 mlx5, qat, 00:01:38.987 bus: 00:01:38.987 auxiliary, pci, vdev, 00:01:38.987 mempool: 00:01:38.987 ring, 00:01:38.987 dma: 00:01:38.987 00:01:38.987 net: 00:01:38.987 00:01:38.988 crypto: 00:01:38.988 ipsec_mb, mlx5, 00:01:38.988 compress: 00:01:38.988 isal, mlx5, 00:01:38.988 vdpa: 00:01:38.988 00:01:38.988 00:01:38.988 Message: 00:01:38.988 ================= 00:01:38.988 Content Skipped 00:01:38.988 ================= 00:01:38.988 00:01:38.988 apps: 00:01:38.988 dumpcap: explicitly disabled via build config 00:01:38.988 graph: explicitly disabled via build config 00:01:38.988 pdump: explicitly disabled via build config 00:01:38.988 proc-info: explicitly disabled via build config 00:01:38.988 test-acl: explicitly disabled via build config 00:01:38.988 test-bbdev: explicitly disabled via build config 00:01:38.988 test-cmdline: explicitly disabled via build config 00:01:38.988 test-compress-perf: explicitly disabled via build config 00:01:38.988 test-crypto-perf: explicitly disabled via build config 00:01:38.988 test-dma-perf: explicitly disabled via build config 00:01:38.988 test-eventdev: explicitly disabled via build config 00:01:38.988 test-fib: explicitly disabled via 
build config 00:01:38.988 test-flow-perf: explicitly disabled via build config 00:01:38.988 test-gpudev: explicitly disabled via build config 00:01:38.988 test-mldev: explicitly disabled via build config 00:01:38.988 test-pipeline: explicitly disabled via build config 00:01:38.988 test-pmd: explicitly disabled via build config 00:01:38.988 test-regex: explicitly disabled via build config 00:01:38.988 test-sad: explicitly disabled via build config 00:01:38.988 test-security-perf: explicitly disabled via build config 00:01:38.988 00:01:38.988 libs: 00:01:38.988 argparse: explicitly disabled via build config 00:01:38.988 metrics: explicitly disabled via build config 00:01:38.988 acl: explicitly disabled via build config 00:01:38.988 bbdev: explicitly disabled via build config 00:01:38.988 bitratestats: explicitly disabled via build config 00:01:38.988 bpf: explicitly disabled via build config 00:01:38.988 cfgfile: explicitly disabled via build config 00:01:38.988 distributor: explicitly disabled via build config 00:01:38.988 efd: explicitly disabled via build config 00:01:38.988 eventdev: explicitly disabled via build config 00:01:38.988 dispatcher: explicitly disabled via build config 00:01:38.988 gpudev: explicitly disabled via build config 00:01:38.988 gro: explicitly disabled via build config 00:01:38.988 gso: explicitly disabled via build config 00:01:38.988 ip_frag: explicitly disabled via build config 00:01:38.988 jobstats: explicitly disabled via build config 00:01:38.988 latencystats: explicitly disabled via build config 00:01:38.988 lpm: explicitly disabled via build config 00:01:38.988 member: explicitly disabled via build config 00:01:38.988 pcapng: explicitly disabled via build config 00:01:38.988 rawdev: explicitly disabled via build config 00:01:38.988 regexdev: explicitly disabled via build config 00:01:38.988 mldev: explicitly disabled via build config 00:01:38.988 rib: explicitly disabled via build config 00:01:38.988 sched: explicitly disabled via build config 00:01:38.988 stack: explicitly disabled via build config 00:01:38.988 ipsec: explicitly disabled via build config 00:01:38.988 pdcp: explicitly disabled via build config 00:01:38.988 fib: explicitly disabled via build config 00:01:38.988 port: explicitly disabled via build config 00:01:38.988 pdump: explicitly disabled via build config 00:01:38.988 table: explicitly disabled via build config 00:01:38.988 pipeline: explicitly disabled via build config 00:01:38.988 graph: explicitly disabled via build config 00:01:38.988 node: explicitly disabled via build config 00:01:38.988 00:01:38.988 drivers: 00:01:38.988 common/cpt: not in enabled drivers build config 00:01:38.988 common/dpaax: not in enabled drivers build config 00:01:38.988 common/iavf: not in enabled drivers build config 00:01:38.988 common/idpf: not in enabled drivers build config 00:01:38.988 common/ionic: not in enabled drivers build config 00:01:38.988 common/mvep: not in enabled drivers build config 00:01:38.988 common/octeontx: not in enabled drivers build config 00:01:38.988 bus/cdx: not in enabled drivers build config 00:01:38.988 bus/dpaa: not in enabled drivers build config 00:01:38.988 bus/fslmc: not in enabled drivers build config 00:01:38.988 bus/ifpga: not in enabled drivers build config 00:01:38.988 bus/platform: not in enabled drivers build config 00:01:38.988 bus/uacce: not in enabled drivers build config 00:01:38.988 bus/vmbus: not in enabled drivers build config 00:01:38.988 common/cnxk: not in enabled drivers build config 00:01:38.988 
common/nfp: not in enabled drivers build config 00:01:38.988 common/nitrox: not in enabled drivers build config 00:01:38.988 common/sfc_efx: not in enabled drivers build config 00:01:38.988 mempool/bucket: not in enabled drivers build config 00:01:38.988 mempool/cnxk: not in enabled drivers build config 00:01:38.988 mempool/dpaa: not in enabled drivers build config 00:01:38.988 mempool/dpaa2: not in enabled drivers build config 00:01:38.988 mempool/octeontx: not in enabled drivers build config 00:01:38.988 mempool/stack: not in enabled drivers build config 00:01:38.988 dma/cnxk: not in enabled drivers build config 00:01:38.988 dma/dpaa: not in enabled drivers build config 00:01:38.988 dma/dpaa2: not in enabled drivers build config 00:01:38.988 dma/hisilicon: not in enabled drivers build config 00:01:38.988 dma/idxd: not in enabled drivers build config 00:01:38.988 dma/ioat: not in enabled drivers build config 00:01:38.988 dma/skeleton: not in enabled drivers build config 00:01:38.988 net/af_packet: not in enabled drivers build config 00:01:38.988 net/af_xdp: not in enabled drivers build config 00:01:38.988 net/ark: not in enabled drivers build config 00:01:38.988 net/atlantic: not in enabled drivers build config 00:01:38.988 net/avp: not in enabled drivers build config 00:01:38.988 net/axgbe: not in enabled drivers build config 00:01:38.988 net/bnx2x: not in enabled drivers build config 00:01:38.988 net/bnxt: not in enabled drivers build config 00:01:38.988 net/bonding: not in enabled drivers build config 00:01:38.988 net/cnxk: not in enabled drivers build config 00:01:38.988 net/cpfl: not in enabled drivers build config 00:01:38.988 net/cxgbe: not in enabled drivers build config 00:01:38.988 net/dpaa: not in enabled drivers build config 00:01:38.988 net/dpaa2: not in enabled drivers build config 00:01:38.988 net/e1000: not in enabled drivers build config 00:01:38.988 net/ena: not in enabled drivers build config 00:01:38.988 net/enetc: not in enabled drivers build config 00:01:38.988 net/enetfec: not in enabled drivers build config 00:01:38.988 net/enic: not in enabled drivers build config 00:01:38.988 net/failsafe: not in enabled drivers build config 00:01:38.988 net/fm10k: not in enabled drivers build config 00:01:38.988 net/gve: not in enabled drivers build config 00:01:38.988 net/hinic: not in enabled drivers build config 00:01:38.988 net/hns3: not in enabled drivers build config 00:01:38.988 net/i40e: not in enabled drivers build config 00:01:38.988 net/iavf: not in enabled drivers build config 00:01:38.988 net/ice: not in enabled drivers build config 00:01:38.988 net/idpf: not in enabled drivers build config 00:01:38.988 net/igc: not in enabled drivers build config 00:01:38.988 net/ionic: not in enabled drivers build config 00:01:38.988 net/ipn3ke: not in enabled drivers build config 00:01:38.988 net/ixgbe: not in enabled drivers build config 00:01:38.988 net/mana: not in enabled drivers build config 00:01:38.988 net/memif: not in enabled drivers build config 00:01:38.988 net/mlx4: not in enabled drivers build config 00:01:38.988 net/mlx5: not in enabled drivers build config 00:01:38.988 net/mvneta: not in enabled drivers build config 00:01:38.988 net/mvpp2: not in enabled drivers build config 00:01:38.988 net/netvsc: not in enabled drivers build config 00:01:38.988 net/nfb: not in enabled drivers build config 00:01:38.988 net/nfp: not in enabled drivers build config 00:01:38.988 net/ngbe: not in enabled drivers build config 00:01:38.988 net/null: not in enabled drivers build config 
00:01:38.988 net/octeontx: not in enabled drivers build config 00:01:38.988 net/octeon_ep: not in enabled drivers build config 00:01:38.988 net/pcap: not in enabled drivers build config 00:01:38.988 net/pfe: not in enabled drivers build config 00:01:38.988 net/qede: not in enabled drivers build config 00:01:38.988 net/ring: not in enabled drivers build config 00:01:38.988 net/sfc: not in enabled drivers build config 00:01:38.988 net/softnic: not in enabled drivers build config 00:01:38.988 net/tap: not in enabled drivers build config 00:01:38.988 net/thunderx: not in enabled drivers build config 00:01:38.988 net/txgbe: not in enabled drivers build config 00:01:38.988 net/vdev_netvsc: not in enabled drivers build config 00:01:38.988 net/vhost: not in enabled drivers build config 00:01:38.988 net/virtio: not in enabled drivers build config 00:01:38.988 net/vmxnet3: not in enabled drivers build config 00:01:38.988 raw/*: missing internal dependency, "rawdev" 00:01:38.988 crypto/armv8: not in enabled drivers build config 00:01:38.988 crypto/bcmfs: not in enabled drivers build config 00:01:38.988 crypto/caam_jr: not in enabled drivers build config 00:01:38.988 crypto/ccp: not in enabled drivers build config 00:01:38.988 crypto/cnxk: not in enabled drivers build config 00:01:38.988 crypto/dpaa_sec: not in enabled drivers build config 00:01:38.988 crypto/dpaa2_sec: not in enabled drivers build config 00:01:38.988 crypto/mvsam: not in enabled drivers build config 00:01:38.988 crypto/nitrox: not in enabled drivers build config 00:01:38.988 crypto/null: not in enabled drivers build config 00:01:38.988 crypto/octeontx: not in enabled drivers build config 00:01:38.988 crypto/openssl: not in enabled drivers build config 00:01:38.988 crypto/scheduler: not in enabled drivers build config 00:01:38.988 crypto/uadk: not in enabled drivers build config 00:01:38.988 crypto/virtio: not in enabled drivers build config 00:01:38.988 compress/nitrox: not in enabled drivers build config 00:01:38.988 compress/octeontx: not in enabled drivers build config 00:01:38.988 compress/zlib: not in enabled drivers build config 00:01:38.988 regex/*: missing internal dependency, "regexdev" 00:01:38.988 ml/*: missing internal dependency, "mldev" 00:01:38.988 vdpa/ifc: not in enabled drivers build config 00:01:38.988 vdpa/mlx5: not in enabled drivers build config 00:01:38.988 vdpa/nfp: not in enabled drivers build config 00:01:38.988 vdpa/sfc: not in enabled drivers build config 00:01:38.988 event/*: missing internal dependency, "eventdev" 00:01:38.988 baseband/*: missing internal dependency, "bbdev" 00:01:38.988 gpu/*: missing internal dependency, "gpudev" 00:01:38.988 00:01:38.988 00:01:38.988 Build targets in project: 115 00:01:38.988 00:01:38.988 DPDK 24.03.0 00:01:38.988 00:01:38.988 User defined options 00:01:38.988 buildtype : debug 00:01:38.988 default_library : shared 00:01:38.988 libdir : lib 00:01:38.988 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:38.988 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:01:38.989 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:38.989 cpu_instruction_set: native 00:01:38.989 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:38.989 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:38.989 enable_docs : false 00:01:38.989 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:38.989 enable_kmods : false 00:01:38.989 max_lcores : 128 00:01:38.989 tests : false 00:01:38.989 00:01:38.989 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.257 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:39.257 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:39.257 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:39.257 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:39.257 [4/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:39.257 [5/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:39.257 [6/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:39.528 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:39.528 [8/378] Linking static target lib/librte_kvargs.a 00:01:39.528 [9/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:39.528 [10/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:39.528 [11/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:39.528 [12/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.528 [13/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.528 [14/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:39.528 [15/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.528 [16/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:39.528 [17/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.528 [18/378] Linking static target lib/librte_log.a 00:01:39.528 [19/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:39.528 [20/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.528 [21/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.528 [22/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.528 [23/378] Linking static target lib/librte_pci.a 00:01:39.528 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.528 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.528 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.528 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.528 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.528 [29/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.528 [30/378] 
Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:39.528 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.789 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.789 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.789 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.789 [35/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.789 [36/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:39.789 [37/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:39.789 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:39.789 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:39.789 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:39.789 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:39.789 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:39.789 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:39.789 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:39.789 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:39.789 [46/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:39.789 [47/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:39.789 [48/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.789 [49/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:40.051 [50/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.051 [51/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.051 [52/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.051 [53/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.051 [54/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.051 [55/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.051 [56/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.051 [57/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.051 [58/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.051 [59/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.051 [60/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.051 [61/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.051 [62/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.051 [63/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:40.051 [64/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.051 [65/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.051 [66/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.051 [67/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.051 [68/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.051 [69/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.051 [70/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.051 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.051 [72/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:40.051 [73/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:40.051 [74/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.051 [75/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.051 [76/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.051 [77/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.051 [78/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.051 [79/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.051 [80/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.051 [81/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.051 [82/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.051 [83/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.051 [84/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.051 [85/378] Linking static target lib/librte_meter.a 00:01:40.051 [86/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.051 [87/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.051 [88/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.051 [89/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.051 [90/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.051 [91/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.051 [92/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.051 [93/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:40.051 [94/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:40.051 [95/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.051 [96/378] Linking static target lib/librte_telemetry.a 00:01:40.051 [97/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.051 [98/378] Linking static target lib/librte_ring.a 00:01:40.051 [99/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.051 [100/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.051 [101/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.051 [102/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.051 [103/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.051 [104/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.051 [105/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.051 [106/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.051 [107/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.051 [108/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.051 [109/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.051 [110/378] Compiling C object 
drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:40.051 [111/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:40.051 [112/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.051 [113/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.051 [114/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.051 [115/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.051 [116/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.051 [117/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.051 [118/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:40.051 [119/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.051 [120/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.051 [121/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.051 [122/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.312 [123/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.312 [124/378] Linking static target lib/librte_cmdline.a 00:01:40.312 [125/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.312 [126/378] Linking static target lib/librte_net.a 00:01:40.312 [127/378] Linking static target lib/librte_timer.a 00:01:40.312 [128/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.312 [129/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.312 [130/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.312 [131/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.312 [132/378] Linking static target lib/librte_rcu.a 00:01:40.312 [133/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.312 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.312 [135/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.312 [136/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.312 [137/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.312 [138/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.312 [139/378] Linking static target lib/librte_mempool.a 00:01:40.312 [140/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.312 [141/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.312 [142/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.312 [143/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.312 [144/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.312 [145/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.312 [146/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.312 [147/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.312 [148/378] Linking static target lib/librte_compressdev.a 00:01:40.312 [149/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.312 [150/378] Linking static target lib/librte_eal.a 00:01:40.312 [151/378] Linking static target 
lib/librte_dmadev.a 00:01:40.312 [152/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.312 [153/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.312 [154/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.312 [155/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:40.312 [156/378] Linking static target lib/librte_mbuf.a 00:01:40.571 [157/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.571 [158/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.571 [159/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.571 [160/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.571 [161/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.571 [162/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.571 [163/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.571 [164/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.571 [165/378] Linking target lib/librte_log.so.24.1 00:01:40.571 [166/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.571 [167/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:40.571 [168/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.571 [169/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.571 [170/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.571 [171/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.571 [172/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.571 [173/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.571 [174/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:40.571 [175/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.571 [176/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:40.571 [177/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.571 [178/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.571 [179/378] Linking static target lib/librte_power.a 00:01:40.571 [180/378] Linking static target lib/librte_hash.a 00:01:40.571 [181/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.831 [182/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.831 [183/378] Linking static target lib/librte_security.a 00:01:40.831 [184/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:40.831 [185/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.831 [186/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.831 [187/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.831 [188/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.831 [189/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.831 [190/378] Linking static target 
lib/librte_reorder.a 00:01:40.831 [191/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.831 [192/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:40.831 [193/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:40.831 [194/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:40.831 [195/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:40.831 [196/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:40.831 [197/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:40.831 [198/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:40.831 [199/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.831 [200/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:40.831 [201/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.831 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:01:40.831 [203/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:40.831 [204/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:40.831 [205/378] Linking target lib/librte_telemetry.so.24.1 00:01:40.831 [206/378] Linking target lib/librte_kvargs.so.24.1 00:01:40.831 [207/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:40.831 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:40.831 [209/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.831 [210/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:40.831 [211/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:40.831 [212/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.831 [213/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.831 [214/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:40.831 [215/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:40.831 [216/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:40.831 [217/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:40.831 [218/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:40.831 [219/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:40.831 [220/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:40.831 [221/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.831 [222/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:40.831 [223/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:40.831 [224/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:40.831 [225/378] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:40.831 [226/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:40.831 [227/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:40.831 [228/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.831 [229/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.831 [230/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.831 [231/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.831 [232/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:40.831 [233/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:40.831 [234/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.831 [235/378] Linking static target drivers/librte_bus_vdev.a 00:01:40.831 [236/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:40.831 [237/378] Linking static target lib/librte_cryptodev.a 00:01:40.831 [238/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.831 [239/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:40.831 [240/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:40.831 [241/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:40.831 [242/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.831 [243/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:40.831 [244/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:40.831 [245/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:40.831 [246/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:40.832 [247/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.832 [248/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:41.090 [249/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.090 [250/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:41.090 [251/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:41.090 [252/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:41.090 [253/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:41.090 [254/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:41.090 [255/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.090 [256/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:41.090 [257/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:41.090 [258/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:41.090 [259/378] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.090 [260/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:41.090 [261/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.090 [262/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:41.090 [263/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:41.090 [264/378] Linking static target drivers/librte_bus_pci.a 00:01:41.090 [265/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:41.090 [266/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:41.090 [267/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:41.090 [268/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:41.090 [269/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:41.090 [270/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.090 [271/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:41.090 [272/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.090 [273/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.090 [274/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.090 [275/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.090 [276/378] Linking static target drivers/librte_mempool_ring.a 00:01:41.090 [277/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.090 [278/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:41.090 [279/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:41.090 [280/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:41.090 [281/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:41.090 [282/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:41.091 [283/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.349 [284/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:41.349 [285/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:41.349 [286/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.349 [287/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:41.349 [288/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:41.349 [289/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.349 [290/378] Linking static target drivers/librte_compress_isal.a 00:01:41.349 [291/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.349 [292/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:41.349 [293/378] Compiling C object 
drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:41.349 [294/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:41.349 [295/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:41.349 [296/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:41.349 [297/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:41.349 [298/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:41.349 [299/378] Linking static target drivers/librte_compress_mlx5.a 00:01:41.349 [300/378] Linking static target lib/librte_ethdev.a 00:01:41.349 [301/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:41.349 [302/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:41.349 [303/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:41.349 [304/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:41.349 [305/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.609 [306/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.609 [307/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:41.609 [308/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:41.609 [309/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.609 [310/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:41.609 [311/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.609 [312/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:41.609 [313/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:41.609 [314/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:41.609 [315/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:41.609 [316/378] Linking static target drivers/librte_common_mlx5.a 00:01:41.868 [317/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:41.868 [318/378] Linking static target drivers/libtmp_rte_common_qat.a 00:01:41.868 [319/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.127 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:42.127 [321/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:42.127 [322/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:42.127 [323/378] Linking static target drivers/librte_common_qat.a 00:01:42.697 [324/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:42.697 [325/378] Linking static target lib/librte_vhost.a 00:01:42.956 [326/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.862 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.399 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:51.587 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.997 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.997 [331/378] Linking target lib/librte_eal.so.24.1 00:01:52.997 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.997 [333/378] Linking target lib/librte_pci.so.24.1 00:01:52.997 [334/378] Linking target lib/librte_timer.so.24.1 00:01:52.997 [335/378] Linking target lib/librte_dmadev.so.24.1 00:01:52.997 [336/378] Linking target lib/librte_ring.so.24.1 00:01:52.997 [337/378] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.997 [338/378] Linking target lib/librte_meter.so.24.1 00:01:52.997 [339/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:01:53.256 [340/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:53.256 [341/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:53.256 [342/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:53.256 [343/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:53.256 [344/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:53.256 [345/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:01:53.256 [346/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:01:53.256 [347/378] Linking target lib/librte_rcu.so.24.1 00:01:53.256 [348/378] Linking target drivers/librte_bus_pci.so.24.1 00:01:53.256 [349/378] Linking target lib/librte_mempool.so.24.1 00:01:53.515 [350/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:53.515 [351/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:53.515 [352/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:01:53.515 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:01:53.515 [354/378] Linking target lib/librte_mbuf.so.24.1 00:01:53.515 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:53.515 [356/378] Linking target lib/librte_cryptodev.so.24.1 00:01:53.515 [357/378] Linking target lib/librte_net.so.24.1 00:01:53.515 [358/378] Linking target lib/librte_compressdev.so.24.1 00:01:53.515 [359/378] Linking target lib/librte_reorder.so.24.1 00:01:53.774 [360/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.774 [361/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.774 [362/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:01:53.774 [363/378] Linking target lib/librte_cmdline.so.24.1 00:01:53.774 [364/378] Linking target lib/librte_security.so.24.1 00:01:53.774 [365/378] Linking target lib/librte_hash.so.24.1 00:01:53.774 [366/378] Linking target lib/librte_ethdev.so.24.1 00:01:53.774 [367/378] Linking target drivers/librte_compress_isal.so.24.1 00:01:54.032 [368/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:01:54.032 [369/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:54.032 [370/378] Generating symbol file 
lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:54.032 [371/378] Linking target lib/librte_power.so.24.1 00:01:54.032 [372/378] Linking target drivers/librte_common_mlx5.so.24.1 00:01:54.032 [373/378] Linking target lib/librte_vhost.so.24.1 00:01:54.032 [374/378] Linking target drivers/librte_common_qat.so.24.1 00:01:54.033 [375/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:01:54.291 [376/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:01:54.291 [377/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:01:54.291 [378/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:01:54.291 INFO: autodetecting backend as ninja 00:01:54.291 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:55.230 CC lib/log/log.o 00:01:55.230 CC lib/log/log_flags.o 00:01:55.230 CC lib/log/log_deprecated.o 00:01:55.488 CC lib/ut_mock/mock.o 00:01:55.488 CC lib/ut/ut.o 00:01:55.488 LIB libspdk_log.a 00:01:55.488 LIB libspdk_ut_mock.a 00:01:55.488 LIB libspdk_ut.a 00:01:55.488 SO libspdk_log.so.7.0 00:01:55.488 SO libspdk_ut_mock.so.6.0 00:01:55.488 SO libspdk_ut.so.2.0 00:01:55.747 SYMLINK libspdk_log.so 00:01:55.747 SYMLINK libspdk_ut_mock.so 00:01:55.747 SYMLINK libspdk_ut.so 00:01:56.005 CC lib/util/base64.o 00:01:56.005 CC lib/util/bit_array.o 00:01:56.005 CC lib/util/cpuset.o 00:01:56.005 CC lib/util/crc16.o 00:01:56.005 CC lib/util/crc32c.o 00:01:56.005 CC lib/util/crc32.o 00:01:56.005 CC lib/util/crc32_ieee.o 00:01:56.005 CC lib/util/crc64.o 00:01:56.005 CC lib/util/dif.o 00:01:56.005 CC lib/util/fd.o 00:01:56.005 CC lib/ioat/ioat.o 00:01:56.005 CC lib/dma/dma.o 00:01:56.005 CC lib/util/file.o 00:01:56.005 CC lib/util/hexlify.o 00:01:56.005 CC lib/util/iov.o 00:01:56.005 CC lib/util/math.o 00:01:56.005 CC lib/util/pipe.o 00:01:56.005 CC lib/util/uuid.o 00:01:56.005 CC lib/util/strerror_tls.o 00:01:56.005 CC lib/util/string.o 00:01:56.005 CC lib/util/xor.o 00:01:56.005 CC lib/util/fd_group.o 00:01:56.005 CC lib/util/zipf.o 00:01:56.005 CXX lib/trace_parser/trace.o 00:01:56.262 CC lib/vfio_user/host/vfio_user.o 00:01:56.262 CC lib/vfio_user/host/vfio_user_pci.o 00:01:56.262 LIB libspdk_dma.a 00:01:56.262 SO libspdk_dma.so.4.0 00:01:56.262 LIB libspdk_ioat.a 00:01:56.262 SO libspdk_ioat.so.7.0 00:01:56.262 SYMLINK libspdk_dma.so 00:01:56.262 SYMLINK libspdk_ioat.so 00:01:56.262 LIB libspdk_vfio_user.a 00:01:56.520 LIB libspdk_util.a 00:01:56.520 SO libspdk_vfio_user.so.5.0 00:01:56.520 SO libspdk_util.so.9.1 00:01:56.520 SYMLINK libspdk_vfio_user.so 00:01:56.520 SYMLINK libspdk_util.so 00:01:56.778 LIB libspdk_trace_parser.a 00:01:56.778 SO libspdk_trace_parser.so.5.0 00:01:56.778 SYMLINK libspdk_trace_parser.so 00:01:57.036 CC lib/rdma_provider/common.o 00:01:57.036 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:57.036 CC lib/env_dpdk/pci.o 00:01:57.036 CC lib/json/json_parse.o 00:01:57.036 CC lib/env_dpdk/env.o 00:01:57.036 CC lib/env_dpdk/init.o 00:01:57.036 CC lib/json/json_util.o 00:01:57.036 CC lib/env_dpdk/memory.o 00:01:57.036 CC lib/json/json_write.o 00:01:57.036 CC lib/env_dpdk/threads.o 00:01:57.036 CC lib/env_dpdk/pci_ioat.o 00:01:57.036 CC lib/idxd/idxd.o 00:01:57.036 CC lib/env_dpdk/pci_virtio.o 00:01:57.036 CC lib/env_dpdk/pci_idxd.o 00:01:57.036 CC lib/env_dpdk/pci_vmd.o 00:01:57.036 CC lib/idxd/idxd_user.o 00:01:57.036 CC lib/env_dpdk/sigbus_handler.o 00:01:57.036 CC lib/env_dpdk/pci_event.o 00:01:57.036 CC 
lib/idxd/idxd_kernel.o 00:01:57.036 CC lib/env_dpdk/pci_dpdk.o 00:01:57.036 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:57.036 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:57.036 CC lib/vmd/vmd.o 00:01:57.036 CC lib/vmd/led.o 00:01:57.036 CC lib/rdma_utils/rdma_utils.o 00:01:57.036 CC lib/conf/conf.o 00:01:57.036 CC lib/reduce/reduce.o 00:01:57.294 LIB libspdk_rdma_provider.a 00:01:57.294 SO libspdk_rdma_provider.so.6.0 00:01:57.294 LIB libspdk_conf.a 00:01:57.294 SYMLINK libspdk_rdma_provider.so 00:01:57.294 LIB libspdk_rdma_utils.a 00:01:57.294 LIB libspdk_json.a 00:01:57.294 SO libspdk_conf.so.6.0 00:01:57.294 SO libspdk_rdma_utils.so.1.0 00:01:57.294 SO libspdk_json.so.6.0 00:01:57.294 SYMLINK libspdk_conf.so 00:01:57.294 SYMLINK libspdk_rdma_utils.so 00:01:57.294 SYMLINK libspdk_json.so 00:01:57.552 LIB libspdk_idxd.a 00:01:57.552 SO libspdk_idxd.so.12.0 00:01:57.552 LIB libspdk_reduce.a 00:01:57.552 LIB libspdk_vmd.a 00:01:57.552 SO libspdk_reduce.so.6.0 00:01:57.552 SO libspdk_vmd.so.6.0 00:01:57.552 SYMLINK libspdk_idxd.so 00:01:57.552 SYMLINK libspdk_reduce.so 00:01:57.552 SYMLINK libspdk_vmd.so 00:01:57.811 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:57.811 CC lib/jsonrpc/jsonrpc_server.o 00:01:57.811 CC lib/jsonrpc/jsonrpc_client.o 00:01:57.811 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:58.069 LIB libspdk_jsonrpc.a 00:01:58.069 SO libspdk_jsonrpc.so.6.0 00:01:58.069 LIB libspdk_env_dpdk.a 00:01:58.069 SYMLINK libspdk_jsonrpc.so 00:01:58.069 SO libspdk_env_dpdk.so.14.1 00:01:58.328 SYMLINK libspdk_env_dpdk.so 00:01:58.586 CC lib/rpc/rpc.o 00:01:58.586 LIB libspdk_rpc.a 00:01:58.586 SO libspdk_rpc.so.6.0 00:01:58.844 SYMLINK libspdk_rpc.so 00:01:59.102 CC lib/keyring/keyring.o 00:01:59.102 CC lib/keyring/keyring_rpc.o 00:01:59.102 CC lib/trace/trace_flags.o 00:01:59.102 CC lib/trace/trace.o 00:01:59.102 CC lib/trace/trace_rpc.o 00:01:59.102 CC lib/notify/notify_rpc.o 00:01:59.102 CC lib/notify/notify.o 00:01:59.360 LIB libspdk_notify.a 00:01:59.360 LIB libspdk_keyring.a 00:01:59.360 SO libspdk_notify.so.6.0 00:01:59.360 LIB libspdk_trace.a 00:01:59.360 SO libspdk_keyring.so.1.0 00:01:59.360 SO libspdk_trace.so.10.0 00:01:59.360 SYMLINK libspdk_notify.so 00:01:59.360 SYMLINK libspdk_keyring.so 00:01:59.360 SYMLINK libspdk_trace.so 00:01:59.925 CC lib/thread/thread.o 00:01:59.925 CC lib/thread/iobuf.o 00:01:59.925 CC lib/sock/sock.o 00:01:59.925 CC lib/sock/sock_rpc.o 00:02:00.183 LIB libspdk_sock.a 00:02:00.183 SO libspdk_sock.so.10.0 00:02:00.183 SYMLINK libspdk_sock.so 00:02:00.749 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:00.749 CC lib/nvme/nvme_ctrlr.o 00:02:00.749 CC lib/nvme/nvme_ns_cmd.o 00:02:00.749 CC lib/nvme/nvme_fabric.o 00:02:00.749 CC lib/nvme/nvme_ns.o 00:02:00.749 CC lib/nvme/nvme_pcie_common.o 00:02:00.749 CC lib/nvme/nvme_pcie.o 00:02:00.749 CC lib/nvme/nvme_qpair.o 00:02:00.749 CC lib/nvme/nvme.o 00:02:00.749 CC lib/nvme/nvme_quirks.o 00:02:00.749 CC lib/nvme/nvme_transport.o 00:02:00.749 CC lib/nvme/nvme_discovery.o 00:02:00.749 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:00.749 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:00.749 CC lib/nvme/nvme_tcp.o 00:02:00.749 CC lib/nvme/nvme_opal.o 00:02:00.749 CC lib/nvme/nvme_io_msg.o 00:02:00.749 CC lib/nvme/nvme_poll_group.o 00:02:00.749 CC lib/nvme/nvme_zns.o 00:02:00.749 CC lib/nvme/nvme_stubs.o 00:02:00.749 CC lib/nvme/nvme_auth.o 00:02:00.749 CC lib/nvme/nvme_cuse.o 00:02:00.749 CC lib/nvme/nvme_rdma.o 00:02:00.749 LIB libspdk_thread.a 00:02:01.006 SO libspdk_thread.so.10.1 00:02:01.006 SYMLINK libspdk_thread.so 00:02:01.263 CC 
lib/init/subsystem.o 00:02:01.263 CC lib/accel/accel.o 00:02:01.263 CC lib/init/json_config.o 00:02:01.263 CC lib/accel/accel_rpc.o 00:02:01.263 CC lib/init/subsystem_rpc.o 00:02:01.263 CC lib/init/rpc.o 00:02:01.263 CC lib/accel/accel_sw.o 00:02:01.263 CC lib/virtio/virtio_vhost_user.o 00:02:01.263 CC lib/virtio/virtio.o 00:02:01.263 CC lib/virtio/virtio_vfio_user.o 00:02:01.263 CC lib/virtio/virtio_pci.o 00:02:01.263 CC lib/blob/blobstore.o 00:02:01.263 CC lib/blob/request.o 00:02:01.263 CC lib/blob/zeroes.o 00:02:01.263 CC lib/blob/blob_bs_dev.o 00:02:01.520 LIB libspdk_init.a 00:02:01.520 SO libspdk_init.so.5.0 00:02:01.520 LIB libspdk_virtio.a 00:02:01.520 SO libspdk_virtio.so.7.0 00:02:01.520 SYMLINK libspdk_init.so 00:02:01.777 SYMLINK libspdk_virtio.so 00:02:02.034 CC lib/event/app.o 00:02:02.034 CC lib/event/log_rpc.o 00:02:02.034 CC lib/event/reactor.o 00:02:02.034 CC lib/event/app_rpc.o 00:02:02.034 CC lib/event/scheduler_static.o 00:02:02.034 LIB libspdk_accel.a 00:02:02.034 SO libspdk_accel.so.15.1 00:02:02.034 SYMLINK libspdk_accel.so 00:02:02.034 LIB libspdk_nvme.a 00:02:02.291 SO libspdk_nvme.so.13.1 00:02:02.291 LIB libspdk_event.a 00:02:02.291 SO libspdk_event.so.14.0 00:02:02.549 SYMLINK libspdk_event.so 00:02:02.549 CC lib/bdev/bdev.o 00:02:02.549 CC lib/bdev/bdev_rpc.o 00:02:02.549 CC lib/bdev/bdev_zone.o 00:02:02.549 CC lib/bdev/part.o 00:02:02.549 CC lib/bdev/scsi_nvme.o 00:02:02.549 SYMLINK libspdk_nvme.so 00:02:03.494 LIB libspdk_blob.a 00:02:03.494 SO libspdk_blob.so.11.0 00:02:03.494 SYMLINK libspdk_blob.so 00:02:03.751 CC lib/lvol/lvol.o 00:02:03.751 CC lib/blobfs/blobfs.o 00:02:03.751 CC lib/blobfs/tree.o 00:02:04.315 LIB libspdk_bdev.a 00:02:04.315 SO libspdk_bdev.so.15.1 00:02:04.315 SYMLINK libspdk_bdev.so 00:02:04.315 LIB libspdk_blobfs.a 00:02:04.571 SO libspdk_blobfs.so.10.0 00:02:04.571 LIB libspdk_lvol.a 00:02:04.571 SYMLINK libspdk_blobfs.so 00:02:04.571 SO libspdk_lvol.so.10.0 00:02:04.571 SYMLINK libspdk_lvol.so 00:02:04.832 CC lib/nbd/nbd.o 00:02:04.832 CC lib/nbd/nbd_rpc.o 00:02:04.832 CC lib/scsi/dev.o 00:02:04.832 CC lib/scsi/lun.o 00:02:04.832 CC lib/scsi/port.o 00:02:04.832 CC lib/scsi/scsi.o 00:02:04.832 CC lib/scsi/scsi_bdev.o 00:02:04.832 CC lib/scsi/scsi_pr.o 00:02:04.832 CC lib/scsi/scsi_rpc.o 00:02:04.832 CC lib/scsi/task.o 00:02:04.832 CC lib/ftl/ftl_core.o 00:02:04.832 CC lib/ftl/ftl_layout.o 00:02:04.832 CC lib/ftl/ftl_init.o 00:02:04.832 CC lib/ublk/ublk.o 00:02:04.832 CC lib/ftl/ftl_io.o 00:02:04.832 CC lib/ublk/ublk_rpc.o 00:02:04.832 CC lib/ftl/ftl_debug.o 00:02:04.832 CC lib/ftl/ftl_sb.o 00:02:04.832 CC lib/ftl/ftl_l2p_flat.o 00:02:04.832 CC lib/ftl/ftl_l2p.o 00:02:04.832 CC lib/nvmf/ctrlr.o 00:02:04.832 CC lib/ftl/ftl_nv_cache.o 00:02:04.832 CC lib/ftl/ftl_band_ops.o 00:02:04.832 CC lib/ftl/ftl_writer.o 00:02:04.832 CC lib/ftl/ftl_band.o 00:02:04.832 CC lib/nvmf/ctrlr_bdev.o 00:02:04.832 CC lib/nvmf/ctrlr_discovery.o 00:02:04.832 CC lib/nvmf/subsystem.o 00:02:04.832 CC lib/ftl/ftl_rq.o 00:02:04.832 CC lib/ftl/ftl_reloc.o 00:02:04.832 CC lib/nvmf/nvmf.o 00:02:04.832 CC lib/nvmf/nvmf_rpc.o 00:02:04.832 CC lib/ftl/ftl_l2p_cache.o 00:02:04.832 CC lib/nvmf/transport.o 00:02:04.832 CC lib/ftl/ftl_p2l.o 00:02:04.832 CC lib/nvmf/tcp.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.832 CC lib/nvmf/stubs.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.832 CC lib/nvmf/mdns_server.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.832 CC lib/nvmf/rdma.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.832 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.832 CC lib/nvmf/auth.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.832 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.832 CC lib/ftl/utils/ftl_conf.o 00:02:04.832 CC lib/ftl/utils/ftl_mempool.o 00:02:04.832 CC lib/ftl/utils/ftl_md.o 00:02:04.832 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.832 CC lib/ftl/utils/ftl_property.o 00:02:04.832 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.832 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.832 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.833 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.833 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.833 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.833 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:04.833 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:04.833 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:04.833 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:04.833 CC lib/ftl/base/ftl_base_dev.o 00:02:04.833 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:04.833 CC lib/ftl/base/ftl_base_bdev.o 00:02:04.833 CC lib/ftl/ftl_trace.o 00:02:05.091 LIB libspdk_nbd.a 00:02:05.348 SO libspdk_nbd.so.7.0 00:02:05.348 LIB libspdk_scsi.a 00:02:05.348 SYMLINK libspdk_nbd.so 00:02:05.348 SO libspdk_scsi.so.9.0 00:02:05.348 LIB libspdk_ublk.a 00:02:05.348 SO libspdk_ublk.so.3.0 00:02:05.348 SYMLINK libspdk_scsi.so 00:02:05.605 SYMLINK libspdk_ublk.so 00:02:05.862 CC lib/iscsi/conn.o 00:02:05.862 CC lib/iscsi/init_grp.o 00:02:05.862 CC lib/iscsi/param.o 00:02:05.862 CC lib/iscsi/iscsi.o 00:02:05.862 CC lib/iscsi/md5.o 00:02:05.862 CC lib/iscsi/tgt_node.o 00:02:05.862 CC lib/iscsi/portal_grp.o 00:02:05.862 CC lib/vhost/vhost.o 00:02:05.862 CC lib/iscsi/iscsi_subsystem.o 00:02:05.862 CC lib/vhost/vhost_rpc.o 00:02:05.862 CC lib/iscsi/iscsi_rpc.o 00:02:05.862 CC lib/vhost/vhost_scsi.o 00:02:05.862 CC lib/iscsi/task.o 00:02:05.862 CC lib/vhost/vhost_blk.o 00:02:05.862 CC lib/vhost/rte_vhost_user.o 00:02:05.862 LIB libspdk_ftl.a 00:02:05.862 SO libspdk_ftl.so.9.0 00:02:06.125 SYMLINK libspdk_ftl.so 00:02:06.389 LIB libspdk_nvmf.a 00:02:06.389 SO libspdk_nvmf.so.18.1 00:02:06.646 LIB libspdk_vhost.a 00:02:06.646 SO libspdk_vhost.so.8.0 00:02:06.646 SYMLINK libspdk_nvmf.so 00:02:06.646 SYMLINK libspdk_vhost.so 00:02:06.646 LIB libspdk_iscsi.a 00:02:06.903 SO libspdk_iscsi.so.8.0 00:02:06.903 SYMLINK libspdk_iscsi.so 00:02:07.470 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.728 CC module/accel/dsa/accel_dsa.o 00:02:07.728 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.728 LIB libspdk_env_dpdk_rpc.a 00:02:07.728 CC module/accel/ioat/accel_ioat.o 00:02:07.728 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:07.728 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:07.728 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.728 CC module/accel/error/accel_error.o 00:02:07.728 CC module/sock/posix/posix.o 00:02:07.728 CC module/accel/error/accel_error_rpc.o 00:02:07.728 CC module/keyring/linux/keyring_rpc.o 00:02:07.728 CC module/keyring/linux/keyring.o 00:02:07.728 CC module/accel/iaa/accel_iaa.o 00:02:07.728 CC module/keyring/file/keyring.o 00:02:07.728 CC module/blob/bdev/blob_bdev.o 00:02:07.728 CC module/keyring/file/keyring_rpc.o 00:02:07.728 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.728 CC 
module/scheduler/gscheduler/gscheduler.o 00:02:07.728 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:07.728 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:07.728 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.728 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.728 SO libspdk_env_dpdk_rpc.so.6.0 00:02:07.728 SYMLINK libspdk_env_dpdk_rpc.so 00:02:07.728 LIB libspdk_keyring_file.a 00:02:07.986 LIB libspdk_keyring_linux.a 00:02:07.986 LIB libspdk_accel_error.a 00:02:07.986 LIB libspdk_scheduler_dpdk_governor.a 00:02:07.986 LIB libspdk_scheduler_gscheduler.a 00:02:07.986 LIB libspdk_accel_ioat.a 00:02:07.986 SO libspdk_keyring_file.so.1.0 00:02:07.986 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:07.986 LIB libspdk_accel_dsa.a 00:02:07.986 LIB libspdk_accel_iaa.a 00:02:07.986 SO libspdk_keyring_linux.so.1.0 00:02:07.986 SO libspdk_accel_error.so.2.0 00:02:07.986 LIB libspdk_scheduler_dynamic.a 00:02:07.986 SO libspdk_scheduler_gscheduler.so.4.0 00:02:07.986 SO libspdk_accel_ioat.so.6.0 00:02:07.986 LIB libspdk_blob_bdev.a 00:02:07.986 SO libspdk_accel_iaa.so.3.0 00:02:07.986 SO libspdk_accel_dsa.so.5.0 00:02:07.986 SO libspdk_scheduler_dynamic.so.4.0 00:02:07.986 SYMLINK libspdk_keyring_file.so 00:02:07.986 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:07.986 SYMLINK libspdk_keyring_linux.so 00:02:07.986 SYMLINK libspdk_accel_error.so 00:02:07.986 SYMLINK libspdk_scheduler_gscheduler.so 00:02:07.986 SO libspdk_blob_bdev.so.11.0 00:02:07.986 SYMLINK libspdk_accel_ioat.so 00:02:07.986 SYMLINK libspdk_accel_dsa.so 00:02:07.986 SYMLINK libspdk_accel_iaa.so 00:02:07.986 SYMLINK libspdk_scheduler_dynamic.so 00:02:07.986 SYMLINK libspdk_blob_bdev.so 00:02:08.244 LIB libspdk_sock_posix.a 00:02:08.244 SO libspdk_sock_posix.so.6.0 00:02:08.502 SYMLINK libspdk_sock_posix.so 00:02:08.502 LIB libspdk_accel_dpdk_compressdev.a 00:02:08.502 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:08.502 CC module/bdev/malloc/bdev_malloc.o 00:02:08.502 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.502 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.502 CC module/bdev/gpt/gpt.o 00:02:08.502 CC module/bdev/ftl/bdev_ftl.o 00:02:08.502 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.502 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.502 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.502 CC module/bdev/nvme/nvme_rpc.o 00:02:08.502 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.502 CC module/bdev/nvme/bdev_nvme.o 00:02:08.502 CC module/bdev/compress/vbdev_compress.o 00:02:08.502 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.502 CC module/bdev/delay/vbdev_delay.o 00:02:08.502 CC module/bdev/split/vbdev_split.o 00:02:08.502 CC module/bdev/null/bdev_null.o 00:02:08.502 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.502 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.502 CC module/bdev/null/bdev_null_rpc.o 00:02:08.502 CC module/bdev/nvme/vbdev_opal.o 00:02:08.502 CC module/bdev/aio/bdev_aio.o 00:02:08.502 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.502 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:08.502 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.502 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.502 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.502 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.502 CC module/bdev/error/vbdev_error.o 00:02:08.502 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.502 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.502 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.502 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.502 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.502 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.502 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.502 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.502 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.502 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.502 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.502 CC module/bdev/raid/bdev_raid.o 00:02:08.502 CC module/bdev/raid/raid0.o 00:02:08.502 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.502 CC module/bdev/raid/raid1.o 00:02:08.502 CC module/bdev/raid/concat.o 00:02:08.502 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:08.502 CC module/bdev/crypto/vbdev_crypto.o 00:02:08.502 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:08.760 LIB libspdk_accel_dpdk_cryptodev.a 00:02:08.760 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:08.760 LIB libspdk_blobfs_bdev.a 00:02:08.760 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:08.760 LIB libspdk_bdev_null.a 00:02:08.760 LIB libspdk_bdev_split.a 00:02:08.760 SO libspdk_blobfs_bdev.so.6.0 00:02:08.760 LIB libspdk_bdev_gpt.a 00:02:08.760 LIB libspdk_bdev_ftl.a 00:02:08.760 SO libspdk_bdev_null.so.6.0 00:02:08.760 SO libspdk_bdev_split.so.6.0 00:02:08.760 LIB libspdk_bdev_error.a 00:02:09.018 SO libspdk_bdev_gpt.so.6.0 00:02:09.018 SO libspdk_bdev_ftl.so.6.0 00:02:09.018 SYMLINK libspdk_blobfs_bdev.so 00:02:09.018 SO libspdk_bdev_error.so.6.0 00:02:09.018 LIB libspdk_bdev_zone_block.a 00:02:09.018 LIB libspdk_bdev_passthru.a 00:02:09.018 LIB libspdk_bdev_malloc.a 00:02:09.018 LIB libspdk_bdev_aio.a 00:02:09.018 SYMLINK libspdk_bdev_split.so 00:02:09.018 SYMLINK libspdk_bdev_null.so 00:02:09.018 LIB libspdk_bdev_crypto.a 00:02:09.018 SYMLINK libspdk_bdev_ftl.so 00:02:09.018 LIB libspdk_bdev_compress.a 00:02:09.018 LIB libspdk_bdev_delay.a 00:02:09.018 LIB libspdk_bdev_iscsi.a 00:02:09.018 SYMLINK libspdk_bdev_gpt.so 00:02:09.018 SO libspdk_bdev_zone_block.so.6.0 00:02:09.018 SO libspdk_bdev_passthru.so.6.0 00:02:09.018 SO libspdk_bdev_malloc.so.6.0 00:02:09.018 SO libspdk_bdev_aio.so.6.0 00:02:09.018 SYMLINK libspdk_bdev_error.so 00:02:09.018 SO libspdk_bdev_compress.so.6.0 00:02:09.018 SO libspdk_bdev_crypto.so.6.0 00:02:09.018 SO libspdk_bdev_delay.so.6.0 00:02:09.018 SO libspdk_bdev_iscsi.so.6.0 00:02:09.018 SYMLINK libspdk_bdev_zone_block.so 00:02:09.018 SYMLINK libspdk_bdev_malloc.so 00:02:09.018 SYMLINK libspdk_bdev_aio.so 00:02:09.018 SYMLINK libspdk_bdev_passthru.so 00:02:09.018 LIB libspdk_bdev_lvol.a 00:02:09.018 SYMLINK libspdk_bdev_delay.so 00:02:09.018 SYMLINK libspdk_bdev_compress.so 00:02:09.018 LIB libspdk_bdev_virtio.a 00:02:09.018 SYMLINK libspdk_bdev_crypto.so 00:02:09.018 SYMLINK libspdk_bdev_iscsi.so 00:02:09.018 SO libspdk_bdev_lvol.so.6.0 00:02:09.018 SO libspdk_bdev_virtio.so.6.0 00:02:09.276 SYMLINK libspdk_bdev_lvol.so 00:02:09.276 SYMLINK libspdk_bdev_virtio.so 00:02:09.276 LIB libspdk_bdev_raid.a 00:02:09.535 SO libspdk_bdev_raid.so.6.0 00:02:09.535 SYMLINK libspdk_bdev_raid.so 00:02:10.102 LIB libspdk_bdev_nvme.a 00:02:10.102 SO libspdk_bdev_nvme.so.7.0 00:02:10.360 SYMLINK libspdk_bdev_nvme.so 00:02:10.928 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:11.187 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:11.187 CC module/event/subsystems/iobuf/iobuf.o 00:02:11.187 CC module/event/subsystems/scheduler/scheduler.o 00:02:11.187 CC module/event/subsystems/sock/sock.o 00:02:11.187 CC module/event/subsystems/vmd/vmd.o 00:02:11.187 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:11.187 CC module/event/subsystems/keyring/keyring.o 
00:02:11.187 LIB libspdk_event_vhost_blk.a 00:02:11.187 LIB libspdk_event_scheduler.a 00:02:11.187 LIB libspdk_event_keyring.a 00:02:11.187 LIB libspdk_event_iobuf.a 00:02:11.187 LIB libspdk_event_sock.a 00:02:11.187 LIB libspdk_event_vmd.a 00:02:11.187 SO libspdk_event_vhost_blk.so.3.0 00:02:11.187 SO libspdk_event_scheduler.so.4.0 00:02:11.187 SO libspdk_event_keyring.so.1.0 00:02:11.187 SO libspdk_event_sock.so.5.0 00:02:11.187 SO libspdk_event_iobuf.so.3.0 00:02:11.187 SO libspdk_event_vmd.so.6.0 00:02:11.187 SYMLINK libspdk_event_vhost_blk.so 00:02:11.187 SYMLINK libspdk_event_scheduler.so 00:02:11.187 SYMLINK libspdk_event_sock.so 00:02:11.446 SYMLINK libspdk_event_keyring.so 00:02:11.446 SYMLINK libspdk_event_vmd.so 00:02:11.446 SYMLINK libspdk_event_iobuf.so 00:02:11.705 CC module/event/subsystems/accel/accel.o 00:02:11.705 LIB libspdk_event_accel.a 00:02:11.964 SO libspdk_event_accel.so.6.0 00:02:11.964 SYMLINK libspdk_event_accel.so 00:02:12.223 CC module/event/subsystems/bdev/bdev.o 00:02:12.481 LIB libspdk_event_bdev.a 00:02:12.481 SO libspdk_event_bdev.so.6.0 00:02:12.481 SYMLINK libspdk_event_bdev.so 00:02:13.047 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:13.047 CC module/event/subsystems/ublk/ublk.o 00:02:13.047 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:13.047 CC module/event/subsystems/nbd/nbd.o 00:02:13.047 CC module/event/subsystems/scsi/scsi.o 00:02:13.047 LIB libspdk_event_ublk.a 00:02:13.047 LIB libspdk_event_nbd.a 00:02:13.047 SO libspdk_event_nbd.so.6.0 00:02:13.047 LIB libspdk_event_scsi.a 00:02:13.047 SO libspdk_event_ublk.so.3.0 00:02:13.047 LIB libspdk_event_nvmf.a 00:02:13.047 SO libspdk_event_scsi.so.6.0 00:02:13.047 SYMLINK libspdk_event_nbd.so 00:02:13.047 SO libspdk_event_nvmf.so.6.0 00:02:13.047 SYMLINK libspdk_event_ublk.so 00:02:13.306 SYMLINK libspdk_event_scsi.so 00:02:13.306 SYMLINK libspdk_event_nvmf.so 00:02:13.564 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:13.564 CC module/event/subsystems/iscsi/iscsi.o 00:02:13.822 LIB libspdk_event_vhost_scsi.a 00:02:13.822 LIB libspdk_event_iscsi.a 00:02:13.822 SO libspdk_event_vhost_scsi.so.3.0 00:02:13.822 SO libspdk_event_iscsi.so.6.0 00:02:13.822 SYMLINK libspdk_event_vhost_scsi.so 00:02:13.822 SYMLINK libspdk_event_iscsi.so 00:02:14.080 SO libspdk.so.6.0 00:02:14.080 SYMLINK libspdk.so 00:02:14.338 CXX app/trace/trace.o 00:02:14.338 CC app/spdk_nvme_identify/identify.o 00:02:14.338 CC app/trace_record/trace_record.o 00:02:14.338 CC app/spdk_nvme_perf/perf.o 00:02:14.338 TEST_HEADER include/spdk/accel_module.h 00:02:14.338 TEST_HEADER include/spdk/accel.h 00:02:14.338 TEST_HEADER include/spdk/assert.h 00:02:14.338 TEST_HEADER include/spdk/barrier.h 00:02:14.338 CC app/spdk_top/spdk_top.o 00:02:14.338 TEST_HEADER include/spdk/base64.h 00:02:14.338 CC test/rpc_client/rpc_client_test.o 00:02:14.338 TEST_HEADER include/spdk/bdev_module.h 00:02:14.338 TEST_HEADER include/spdk/bdev_zone.h 00:02:14.338 TEST_HEADER include/spdk/bdev.h 00:02:14.338 CC app/spdk_nvme_discover/discovery_aer.o 00:02:14.338 CC app/spdk_lspci/spdk_lspci.o 00:02:14.338 TEST_HEADER include/spdk/bit_array.h 00:02:14.338 TEST_HEADER include/spdk/bit_pool.h 00:02:14.338 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:14.338 TEST_HEADER include/spdk/blobfs.h 00:02:14.338 TEST_HEADER include/spdk/blob_bdev.h 00:02:14.338 TEST_HEADER include/spdk/blob.h 00:02:14.338 TEST_HEADER include/spdk/config.h 00:02:14.338 TEST_HEADER include/spdk/cpuset.h 00:02:14.338 TEST_HEADER include/spdk/conf.h 00:02:14.338 TEST_HEADER 
include/spdk/crc16.h 00:02:14.338 TEST_HEADER include/spdk/crc32.h 00:02:14.338 TEST_HEADER include/spdk/dif.h 00:02:14.338 TEST_HEADER include/spdk/dma.h 00:02:14.338 TEST_HEADER include/spdk/crc64.h 00:02:14.338 TEST_HEADER include/spdk/endian.h 00:02:14.338 TEST_HEADER include/spdk/env_dpdk.h 00:02:14.338 TEST_HEADER include/spdk/env.h 00:02:14.338 TEST_HEADER include/spdk/event.h 00:02:14.338 TEST_HEADER include/spdk/fd_group.h 00:02:14.338 TEST_HEADER include/spdk/ftl.h 00:02:14.338 TEST_HEADER include/spdk/fd.h 00:02:14.338 TEST_HEADER include/spdk/file.h 00:02:14.338 TEST_HEADER include/spdk/gpt_spec.h 00:02:14.605 TEST_HEADER include/spdk/hexlify.h 00:02:14.605 TEST_HEADER include/spdk/idxd.h 00:02:14.605 TEST_HEADER include/spdk/histogram_data.h 00:02:14.605 TEST_HEADER include/spdk/init.h 00:02:14.605 TEST_HEADER include/spdk/idxd_spec.h 00:02:14.605 TEST_HEADER include/spdk/ioat.h 00:02:14.605 TEST_HEADER include/spdk/iscsi_spec.h 00:02:14.605 TEST_HEADER include/spdk/ioat_spec.h 00:02:14.605 TEST_HEADER include/spdk/json.h 00:02:14.605 TEST_HEADER include/spdk/jsonrpc.h 00:02:14.605 TEST_HEADER include/spdk/keyring_module.h 00:02:14.605 TEST_HEADER include/spdk/keyring.h 00:02:14.605 TEST_HEADER include/spdk/likely.h 00:02:14.605 CC app/iscsi_tgt/iscsi_tgt.o 00:02:14.605 TEST_HEADER include/spdk/log.h 00:02:14.605 TEST_HEADER include/spdk/lvol.h 00:02:14.605 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:14.605 TEST_HEADER include/spdk/mmio.h 00:02:14.605 TEST_HEADER include/spdk/memory.h 00:02:14.605 TEST_HEADER include/spdk/nbd.h 00:02:14.605 TEST_HEADER include/spdk/notify.h 00:02:14.605 TEST_HEADER include/spdk/nvme_intel.h 00:02:14.605 TEST_HEADER include/spdk/nvme.h 00:02:14.605 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:14.605 TEST_HEADER include/spdk/nvme_spec.h 00:02:14.605 TEST_HEADER include/spdk/nvme_zns.h 00:02:14.605 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:14.605 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:14.605 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:14.605 TEST_HEADER include/spdk/nvmf_spec.h 00:02:14.605 TEST_HEADER include/spdk/opal.h 00:02:14.605 TEST_HEADER include/spdk/nvmf_transport.h 00:02:14.605 TEST_HEADER include/spdk/nvmf.h 00:02:14.605 TEST_HEADER include/spdk/opal_spec.h 00:02:14.605 CC app/nvmf_tgt/nvmf_main.o 00:02:14.605 CC app/spdk_dd/spdk_dd.o 00:02:14.605 TEST_HEADER include/spdk/pipe.h 00:02:14.605 TEST_HEADER include/spdk/pci_ids.h 00:02:14.605 CC app/spdk_tgt/spdk_tgt.o 00:02:14.605 TEST_HEADER include/spdk/queue.h 00:02:14.605 TEST_HEADER include/spdk/reduce.h 00:02:14.605 TEST_HEADER include/spdk/scheduler.h 00:02:14.605 TEST_HEADER include/spdk/rpc.h 00:02:14.605 TEST_HEADER include/spdk/scsi.h 00:02:14.605 TEST_HEADER include/spdk/scsi_spec.h 00:02:14.605 TEST_HEADER include/spdk/sock.h 00:02:14.605 TEST_HEADER include/spdk/stdinc.h 00:02:14.605 TEST_HEADER include/spdk/thread.h 00:02:14.605 TEST_HEADER include/spdk/string.h 00:02:14.605 TEST_HEADER include/spdk/trace.h 00:02:14.605 TEST_HEADER include/spdk/trace_parser.h 00:02:14.605 TEST_HEADER include/spdk/tree.h 00:02:14.605 TEST_HEADER include/spdk/util.h 00:02:14.605 TEST_HEADER include/spdk/ublk.h 00:02:14.605 TEST_HEADER include/spdk/version.h 00:02:14.605 TEST_HEADER include/spdk/uuid.h 00:02:14.605 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:14.605 TEST_HEADER include/spdk/vmd.h 00:02:14.605 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:14.605 TEST_HEADER include/spdk/vhost.h 00:02:14.605 TEST_HEADER include/spdk/xor.h 00:02:14.605 TEST_HEADER 
include/spdk/zipf.h 00:02:14.605 CXX test/cpp_headers/accel.o 00:02:14.605 CXX test/cpp_headers/accel_module.o 00:02:14.605 CXX test/cpp_headers/assert.o 00:02:14.605 CXX test/cpp_headers/barrier.o 00:02:14.605 CXX test/cpp_headers/base64.o 00:02:14.605 CXX test/cpp_headers/bdev.o 00:02:14.605 CXX test/cpp_headers/bdev_module.o 00:02:14.605 CXX test/cpp_headers/bdev_zone.o 00:02:14.605 CXX test/cpp_headers/bit_pool.o 00:02:14.605 CXX test/cpp_headers/bit_array.o 00:02:14.605 CXX test/cpp_headers/blobfs_bdev.o 00:02:14.605 CXX test/cpp_headers/blobfs.o 00:02:14.605 CXX test/cpp_headers/blob_bdev.o 00:02:14.605 CXX test/cpp_headers/blob.o 00:02:14.605 CXX test/cpp_headers/conf.o 00:02:14.605 CXX test/cpp_headers/cpuset.o 00:02:14.605 CXX test/cpp_headers/config.o 00:02:14.605 CXX test/cpp_headers/crc16.o 00:02:14.605 CXX test/cpp_headers/crc64.o 00:02:14.605 CXX test/cpp_headers/crc32.o 00:02:14.605 CXX test/cpp_headers/dma.o 00:02:14.605 CXX test/cpp_headers/dif.o 00:02:14.605 CXX test/cpp_headers/env.o 00:02:14.605 CXX test/cpp_headers/endian.o 00:02:14.605 CXX test/cpp_headers/event.o 00:02:14.605 CXX test/cpp_headers/env_dpdk.o 00:02:14.605 CXX test/cpp_headers/fd.o 00:02:14.605 CXX test/cpp_headers/fd_group.o 00:02:14.605 CXX test/cpp_headers/ftl.o 00:02:14.605 CXX test/cpp_headers/file.o 00:02:14.605 CXX test/cpp_headers/gpt_spec.o 00:02:14.605 CXX test/cpp_headers/histogram_data.o 00:02:14.605 CXX test/cpp_headers/hexlify.o 00:02:14.605 CXX test/cpp_headers/idxd_spec.o 00:02:14.605 CXX test/cpp_headers/idxd.o 00:02:14.605 CXX test/cpp_headers/init.o 00:02:14.605 CXX test/cpp_headers/ioat.o 00:02:14.605 CXX test/cpp_headers/json.o 00:02:14.605 CXX test/cpp_headers/iscsi_spec.o 00:02:14.605 CXX test/cpp_headers/ioat_spec.o 00:02:14.605 CXX test/cpp_headers/jsonrpc.o 00:02:14.605 CXX test/cpp_headers/keyring.o 00:02:14.605 CXX test/cpp_headers/keyring_module.o 00:02:14.605 CXX test/cpp_headers/likely.o 00:02:14.605 CXX test/cpp_headers/lvol.o 00:02:14.605 CXX test/cpp_headers/log.o 00:02:14.605 CXX test/cpp_headers/mmio.o 00:02:14.605 CXX test/cpp_headers/nbd.o 00:02:14.605 CXX test/cpp_headers/memory.o 00:02:14.605 CXX test/cpp_headers/notify.o 00:02:14.605 CXX test/cpp_headers/nvme.o 00:02:14.605 CXX test/cpp_headers/nvme_intel.o 00:02:14.605 CXX test/cpp_headers/nvme_ocssd.o 00:02:14.605 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:14.605 CXX test/cpp_headers/nvme_spec.o 00:02:14.605 CXX test/cpp_headers/nvme_zns.o 00:02:14.605 CXX test/cpp_headers/nvmf_cmd.o 00:02:14.605 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:14.605 CXX test/cpp_headers/nvmf.o 00:02:14.605 CXX test/cpp_headers/nvmf_spec.o 00:02:14.605 CXX test/cpp_headers/nvmf_transport.o 00:02:14.605 CXX test/cpp_headers/opal.o 00:02:14.605 CXX test/cpp_headers/opal_spec.o 00:02:14.605 CXX test/cpp_headers/pci_ids.o 00:02:14.605 CXX test/cpp_headers/pipe.o 00:02:14.605 CXX test/cpp_headers/queue.o 00:02:14.605 CXX test/cpp_headers/reduce.o 00:02:14.605 CXX test/cpp_headers/rpc.o 00:02:14.605 CXX test/cpp_headers/scsi.o 00:02:14.605 CC examples/util/zipf/zipf.o 00:02:14.605 CXX test/cpp_headers/scheduler.o 00:02:14.605 CXX test/cpp_headers/scsi_spec.o 00:02:14.605 CXX test/cpp_headers/sock.o 00:02:14.605 CXX test/cpp_headers/stdinc.o 00:02:14.605 CXX test/cpp_headers/string.o 00:02:14.605 CXX test/cpp_headers/thread.o 00:02:14.605 CC test/app/histogram_perf/histogram_perf.o 00:02:14.605 CXX test/cpp_headers/trace.o 00:02:14.605 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:14.605 CXX test/cpp_headers/trace_parser.o 
00:02:14.605 CXX test/cpp_headers/tree.o 00:02:14.605 CXX test/cpp_headers/ublk.o 00:02:14.605 CC test/app/jsoncat/jsoncat.o 00:02:14.605 CXX test/cpp_headers/util.o 00:02:14.605 CXX test/cpp_headers/uuid.o 00:02:14.605 CXX test/cpp_headers/vfio_user_pci.o 00:02:14.605 CXX test/cpp_headers/version.o 00:02:14.605 CC test/env/pci/pci_ut.o 00:02:14.605 CC test/env/memory/memory_ut.o 00:02:14.605 CC test/env/vtophys/vtophys.o 00:02:14.605 CC examples/ioat/verify/verify.o 00:02:14.605 CC app/fio/nvme/fio_plugin.o 00:02:14.605 CC examples/ioat/perf/perf.o 00:02:14.605 CC test/thread/poller_perf/poller_perf.o 00:02:14.605 CC test/app/stub/stub.o 00:02:14.605 CXX test/cpp_headers/vfio_user_spec.o 00:02:14.605 CC test/dma/test_dma/test_dma.o 00:02:14.887 CXX test/cpp_headers/vhost.o 00:02:14.887 CC test/app/bdev_svc/bdev_svc.o 00:02:14.887 LINK spdk_lspci 00:02:14.887 CC app/fio/bdev/fio_plugin.o 00:02:14.887 LINK rpc_client_test 00:02:15.158 LINK spdk_nvme_discover 00:02:15.158 CC test/env/mem_callbacks/mem_callbacks.o 00:02:15.158 LINK spdk_trace_record 00:02:15.158 LINK interrupt_tgt 00:02:15.158 LINK spdk_tgt 00:02:15.158 LINK iscsi_tgt 00:02:15.158 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:15.158 LINK nvmf_tgt 00:02:15.158 LINK histogram_perf 00:02:15.417 LINK jsoncat 00:02:15.417 LINK zipf 00:02:15.417 CXX test/cpp_headers/vmd.o 00:02:15.417 CXX test/cpp_headers/xor.o 00:02:15.417 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:15.417 CXX test/cpp_headers/zipf.o 00:02:15.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:15.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:15.417 LINK vtophys 00:02:15.417 LINK poller_perf 00:02:15.417 LINK env_dpdk_post_init 00:02:15.417 LINK verify 00:02:15.417 LINK stub 00:02:15.417 LINK bdev_svc 00:02:15.417 LINK ioat_perf 00:02:15.417 LINK spdk_dd 00:02:15.417 LINK spdk_trace 00:02:15.674 LINK pci_ut 00:02:15.674 LINK test_dma 00:02:15.674 LINK spdk_bdev 00:02:15.674 LINK nvme_fuzz 00:02:15.674 LINK spdk_nvme 00:02:15.674 LINK spdk_nvme_perf 00:02:15.674 LINK vhost_fuzz 00:02:15.674 LINK spdk_top 00:02:15.931 LINK spdk_nvme_identify 00:02:15.931 CC examples/sock/hello_world/hello_sock.o 00:02:15.931 CC examples/idxd/perf/perf.o 00:02:15.931 LINK mem_callbacks 00:02:15.931 CC examples/vmd/led/led.o 00:02:15.931 CC examples/vmd/lsvmd/lsvmd.o 00:02:15.931 CC test/event/reactor_perf/reactor_perf.o 00:02:15.931 CC test/event/event_perf/event_perf.o 00:02:15.931 CC test/event/app_repeat/app_repeat.o 00:02:15.931 CC test/event/reactor/reactor.o 00:02:15.931 CC examples/thread/thread/thread_ex.o 00:02:15.931 CC app/vhost/vhost.o 00:02:15.931 CC test/event/scheduler/scheduler.o 00:02:15.931 LINK lsvmd 00:02:15.931 LINK led 00:02:15.931 LINK reactor_perf 00:02:15.931 LINK reactor 00:02:15.931 LINK event_perf 00:02:15.931 LINK app_repeat 00:02:15.931 LINK vhost 00:02:15.931 LINK hello_sock 00:02:16.188 LINK memory_ut 00:02:16.188 CC test/nvme/sgl/sgl.o 00:02:16.188 CC test/nvme/compliance/nvme_compliance.o 00:02:16.188 CC test/nvme/cuse/cuse.o 00:02:16.188 CC test/nvme/reset/reset.o 00:02:16.188 CC test/nvme/e2edp/nvme_dp.o 00:02:16.188 CC test/nvme/connect_stress/connect_stress.o 00:02:16.188 CC test/nvme/simple_copy/simple_copy.o 00:02:16.188 CC test/nvme/boot_partition/boot_partition.o 00:02:16.188 CC test/nvme/fused_ordering/fused_ordering.o 00:02:16.188 CC test/nvme/startup/startup.o 00:02:16.188 CC test/nvme/overhead/overhead.o 00:02:16.188 CC test/nvme/aer/aer.o 00:02:16.188 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:16.188 CC 
test/nvme/reserve/reserve.o 00:02:16.188 LINK thread 00:02:16.188 CC test/nvme/fdp/fdp.o 00:02:16.188 CC test/nvme/err_injection/err_injection.o 00:02:16.188 LINK idxd_perf 00:02:16.188 CC test/accel/dif/dif.o 00:02:16.188 LINK scheduler 00:02:16.188 CC test/blobfs/mkfs/mkfs.o 00:02:16.188 CC test/lvol/esnap/esnap.o 00:02:16.188 LINK startup 00:02:16.188 LINK doorbell_aers 00:02:16.188 LINK boot_partition 00:02:16.188 LINK err_injection 00:02:16.188 LINK reserve 00:02:16.188 LINK connect_stress 00:02:16.445 LINK fused_ordering 00:02:16.445 LINK sgl 00:02:16.445 LINK simple_copy 00:02:16.445 LINK nvme_dp 00:02:16.445 LINK reset 00:02:16.445 LINK aer 00:02:16.445 LINK mkfs 00:02:16.445 LINK overhead 00:02:16.445 LINK nvme_compliance 00:02:16.445 LINK fdp 00:02:16.445 CC examples/nvme/abort/abort.o 00:02:16.445 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:16.446 CC examples/nvme/arbitration/arbitration.o 00:02:16.446 CC examples/nvme/hello_world/hello_world.o 00:02:16.446 CC examples/nvme/reconnect/reconnect.o 00:02:16.446 CC examples/nvme/hotplug/hotplug.o 00:02:16.446 LINK dif 00:02:16.446 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:16.446 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:16.703 LINK iscsi_fuzz 00:02:16.703 CC examples/accel/perf/accel_perf.o 00:02:16.703 LINK cmb_copy 00:02:16.703 CC examples/blob/cli/blobcli.o 00:02:16.703 CC examples/blob/hello_world/hello_blob.o 00:02:16.703 LINK pmr_persistence 00:02:16.703 LINK hotplug 00:02:16.703 LINK hello_world 00:02:16.703 LINK arbitration 00:02:16.703 LINK abort 00:02:16.961 LINK reconnect 00:02:16.961 LINK nvme_manage 00:02:16.961 LINK hello_blob 00:02:16.961 LINK accel_perf 00:02:17.218 CC test/bdev/bdevio/bdevio.o 00:02:17.218 LINK cuse 00:02:17.218 LINK blobcli 00:02:17.476 LINK bdevio 00:02:17.476 CC examples/bdev/hello_world/hello_bdev.o 00:02:17.734 CC examples/bdev/bdevperf/bdevperf.o 00:02:17.734 LINK hello_bdev 00:02:18.306 LINK bdevperf 00:02:18.908 CC examples/nvmf/nvmf/nvmf.o 00:02:18.908 LINK nvmf 00:02:19.844 LINK esnap 00:02:19.844 00:02:19.844 real 1m11.566s 00:02:19.844 user 12m49.791s 00:02:19.844 sys 4m54.519s 00:02:19.844 10:09:44 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:19.844 10:09:44 make -- common/autotest_common.sh@10 -- $ set +x 00:02:19.844 ************************************ 00:02:19.844 END TEST make 00:02:19.844 ************************************ 00:02:20.104 10:09:44 -- common/autotest_common.sh@1142 -- $ return 0 00:02:20.104 10:09:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:20.104 10:09:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:20.104 10:09:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:20.104 10:09:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.104 10:09:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:20.104 10:09:44 -- pm/common@44 -- $ pid=1567675 00:02:20.104 10:09:44 -- pm/common@50 -- $ kill -TERM 1567675 00:02:20.104 10:09:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.104 10:09:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:20.104 10:09:44 -- pm/common@44 -- $ pid=1567676 00:02:20.104 10:09:44 -- pm/common@50 -- $ kill -TERM 1567676 00:02:20.104 10:09:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.104 10:09:44 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:20.104 10:09:44 -- pm/common@44 -- $ pid=1567678 00:02:20.104 10:09:44 -- pm/common@50 -- $ kill -TERM 1567678 00:02:20.104 10:09:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.104 10:09:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:20.104 10:09:44 -- pm/common@44 -- $ pid=1567702 00:02:20.104 10:09:44 -- pm/common@50 -- $ sudo -E kill -TERM 1567702 00:02:20.104 10:09:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:20.104 10:09:44 -- nvmf/common.sh@7 -- # uname -s 00:02:20.104 10:09:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:20.104 10:09:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:20.104 10:09:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:20.104 10:09:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:20.104 10:09:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:20.104 10:09:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:20.104 10:09:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:20.104 10:09:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:20.104 10:09:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:20.104 10:09:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:20.104 10:09:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:02:20.104 10:09:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:02:20.104 10:09:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:20.104 10:09:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:20.104 10:09:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:20.104 10:09:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:20.104 10:09:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:20.104 10:09:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:20.104 10:09:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.104 10:09:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.104 10:09:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.104 10:09:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.104 10:09:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.104 10:09:44 -- paths/export.sh@5 -- # export PATH 00:02:20.104 10:09:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.104 10:09:44 -- nvmf/common.sh@47 -- # : 0 00:02:20.104 10:09:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:20.104 10:09:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:20.104 10:09:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:20.104 10:09:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:20.104 10:09:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:20.104 10:09:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:20.104 10:09:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:20.104 10:09:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:20.104 10:09:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:20.104 10:09:44 -- spdk/autotest.sh@32 -- # uname -s 00:02:20.104 10:09:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:20.104 10:09:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:20.104 10:09:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:20.104 10:09:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:20.104 10:09:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:20.104 10:09:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:20.104 10:09:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:20.104 10:09:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:20.104 10:09:44 -- spdk/autotest.sh@48 -- # udevadm_pid=1635936 00:02:20.104 10:09:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:20.104 10:09:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:20.104 10:09:44 -- pm/common@17 -- # local monitor 00:02:20.104 10:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.104 10:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.104 10:09:44 -- pm/common@21 -- # date +%s 00:02:20.104 10:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.104 10:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.104 10:09:44 -- pm/common@21 -- # date +%s 00:02:20.104 10:09:44 -- pm/common@25 -- # sleep 1 00:02:20.104 10:09:44 -- pm/common@21 -- # date +%s 00:02:20.104 10:09:44 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721030984 00:02:20.104 10:09:44 -- pm/common@21 -- # date +%s 00:02:20.104 10:09:44 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721030984 00:02:20.104 10:09:44 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721030984 00:02:20.104 10:09:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721030984 00:02:20.364 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721030984_collect-cpu-temp.pm.log 00:02:20.364 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721030984_collect-vmstat.pm.log 00:02:20.364 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721030984_collect-cpu-load.pm.log 00:02:20.364 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721030984_collect-bmc-pm.bmc.pm.log 00:02:21.300 10:09:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:21.300 10:09:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:21.300 10:09:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:21.300 10:09:45 -- common/autotest_common.sh@10 -- # set +x 00:02:21.300 10:09:45 -- spdk/autotest.sh@59 -- # create_test_list 00:02:21.300 10:09:45 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:21.300 10:09:45 -- common/autotest_common.sh@10 -- # set +x 00:02:21.300 10:09:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:21.300 10:09:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:21.300 10:09:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:21.300 10:09:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:21.300 10:09:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:21.300 10:09:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:21.300 10:09:45 -- common/autotest_common.sh@1455 -- # uname 00:02:21.300 10:09:45 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:21.300 10:09:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:21.300 10:09:45 -- common/autotest_common.sh@1475 -- # uname 00:02:21.300 10:09:45 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:21.300 10:09:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:21.300 10:09:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:21.300 10:09:45 -- spdk/autotest.sh@72 -- # hash lcov 00:02:21.300 10:09:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:21.300 10:09:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:21.300 --rc lcov_branch_coverage=1 00:02:21.300 --rc lcov_function_coverage=1 00:02:21.300 --rc genhtml_branch_coverage=1 00:02:21.300 --rc genhtml_function_coverage=1 00:02:21.300 --rc genhtml_legend=1 00:02:21.300 --rc geninfo_all_blocks=1 00:02:21.300 ' 00:02:21.300 10:09:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:21.300 --rc lcov_branch_coverage=1 00:02:21.300 --rc lcov_function_coverage=1 00:02:21.300 --rc genhtml_branch_coverage=1 00:02:21.300 --rc genhtml_function_coverage=1 00:02:21.300 --rc genhtml_legend=1 00:02:21.300 --rc geninfo_all_blocks=1 00:02:21.300 ' 00:02:21.300 10:09:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:21.300 --rc lcov_branch_coverage=1 00:02:21.300 --rc lcov_function_coverage=1 00:02:21.300 --rc genhtml_branch_coverage=1 00:02:21.300 --rc genhtml_function_coverage=1 00:02:21.300 --rc genhtml_legend=1 00:02:21.300 --rc geninfo_all_blocks=1 00:02:21.301 --no-external' 00:02:21.301 10:09:45 -- spdk/autotest.sh@81 -- # 
LCOV='lcov 00:02:21.301 --rc lcov_branch_coverage=1 00:02:21.301 --rc lcov_function_coverage=1 00:02:21.301 --rc genhtml_branch_coverage=1 00:02:21.301 --rc genhtml_function_coverage=1 00:02:21.301 --rc genhtml_legend=1 00:02:21.301 --rc geninfo_all_blocks=1 00:02:21.301 --no-external' 00:02:21.301 10:09:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:21.301 lcov: LCOV version 1.14 00:02:21.301 10:09:46 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:24.590 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:24.590 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:24.590 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:24.590 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:24.849 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:24.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:24.849 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:24.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:24.849 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:24.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:24.849 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:24.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:24.849 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:24.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:24.849 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:24.849 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:24.849 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:24.850 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:24.850 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:24.850 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:24.850 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:25.111 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:25.111 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:37.318 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:37.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:42.583 10:10:06 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:42.583 10:10:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:42.583 10:10:06 -- common/autotest_common.sh@10 -- # set +x 00:02:42.583 10:10:06 -- spdk/autotest.sh@91 -- # rm -f 00:02:42.583 10:10:06 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.863 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:45.863 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:45.863 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:45.863 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:46.135 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:46.136 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:46.136 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:46.136 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:46.136 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:46.136 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:46.136 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:46.136 0000:80:04.4 (8086 
2021): Already using the ioatdma driver 00:02:46.136 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:46.449 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:46.449 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:46.449 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:46.449 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:46.449 10:10:11 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:46.449 10:10:11 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:46.449 10:10:11 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:46.449 10:10:11 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:46.449 10:10:11 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:46.449 10:10:11 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:46.449 10:10:11 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:46.449 10:10:11 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.449 10:10:11 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:46.449 10:10:11 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:46.449 10:10:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:46.449 10:10:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:46.449 10:10:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:46.449 10:10:11 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:46.449 10:10:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:46.449 No valid GPT data, bailing 00:02:46.449 10:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:46.449 10:10:11 -- scripts/common.sh@391 -- # pt= 00:02:46.449 10:10:11 -- scripts/common.sh@392 -- # return 1 00:02:46.449 10:10:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:46.449 1+0 records in 00:02:46.449 1+0 records out 00:02:46.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00242003 s, 433 MB/s 00:02:46.449 10:10:11 -- spdk/autotest.sh@118 -- # sync 00:02:46.449 10:10:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:46.449 10:10:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:46.449 10:10:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:53.016 10:10:17 -- spdk/autotest.sh@124 -- # uname -s 00:02:53.016 10:10:17 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:53.016 10:10:17 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:02:53.016 10:10:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.016 10:10:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.016 10:10:17 -- common/autotest_common.sh@10 -- # set +x 00:02:53.016 ************************************ 00:02:53.016 START TEST setup.sh 00:02:53.016 ************************************ 00:02:53.016 10:10:17 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:02:53.016 * Looking for test storage... 
00:02:53.016 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:02:53.016 10:10:17 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:53.016 10:10:17 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:53.016 10:10:17 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:02:53.016 10:10:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.016 10:10:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.016 10:10:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:53.016 ************************************ 00:02:53.016 START TEST acl 00:02:53.016 ************************************ 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:02:53.016 * Looking for test storage... 00:02:53.016 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:02:53.016 10:10:17 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.016 10:10:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:53.016 10:10:17 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:53.016 10:10:17 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:53.016 10:10:17 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:53.016 10:10:17 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:53.017 10:10:17 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:53.017 10:10:17 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:53.017 10:10:17 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.205 10:10:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:57.205 10:10:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:57.205 10:10:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.205 10:10:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:57.205 10:10:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.205 10:10:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:01.400 Hugepages 00:03:01.400 node hugesize free / total 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 00:03:01.400 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- 
# [[ 0000:80:04.1 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:01.400 10:10:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:01.400 10:10:25 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.400 10:10:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.400 10:10:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:01.400 ************************************ 00:03:01.400 START TEST denied 00:03:01.400 ************************************ 00:03:01.400 10:10:25 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:01.400 10:10:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:01.400 10:10:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:01.400 
10:10:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:01.400 10:10:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.400 10:10:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:05.594 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.594 10:10:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.862 00:03:10.862 real 0m9.003s 00:03:10.862 user 0m2.732s 00:03:10.862 sys 0m5.520s 00:03:10.863 10:10:34 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:10.863 10:10:34 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:10.863 ************************************ 00:03:10.863 END TEST denied 00:03:10.863 ************************************ 00:03:10.863 10:10:34 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:10.863 10:10:34 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:10.863 10:10:34 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.863 10:10:34 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.863 10:10:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:10.863 ************************************ 00:03:10.863 START TEST allowed 00:03:10.863 ************************************ 00:03:10.863 10:10:34 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:10.863 10:10:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:10.863 10:10:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:10.863 10:10:34 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:10.863 10:10:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.863 10:10:34 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:16.135 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:16.135 10:10:40 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:16.135 10:10:40 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:16.135 10:10:40 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:16.135 10:10:40 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.135 10:10:40 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.327 00:03:20.327 real 0m10.219s 00:03:20.327 user 0m2.677s 
00:03:20.327 sys 0m5.714s 00:03:20.327 10:10:45 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.327 10:10:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:20.327 ************************************ 00:03:20.327 END TEST allowed 00:03:20.327 ************************************ 00:03:20.327 10:10:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:20.327 00:03:20.327 real 0m27.645s 00:03:20.327 user 0m8.173s 00:03:20.327 sys 0m17.044s 00:03:20.327 10:10:45 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.327 10:10:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:20.327 ************************************ 00:03:20.327 END TEST acl 00:03:20.327 ************************************ 00:03:20.587 10:10:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:20.587 10:10:45 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.587 10:10:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.587 10:10:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.587 10:10:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:20.587 ************************************ 00:03:20.587 START TEST hugepages 00:03:20.587 ************************************ 00:03:20.587 10:10:45 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.587 * Looking for test storage... 00:03:20.587 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 37260688 kB' 'MemAvailable: 41063680 kB' 'Buffers: 11368 kB' 'Cached: 14655444 kB' 'SwapCached: 0 kB' 'Active: 11682216 kB' 'Inactive: 3531684 kB' 'Active(anon): 11269680 kB' 'Inactive(anon): 
0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550508 kB' 'Mapped: 210776 kB' 'Shmem: 10722592 kB' 'KReclaimable: 506480 kB' 'Slab: 1157904 kB' 'SReclaimable: 506480 kB' 'SUnreclaim: 651424 kB' 'KernelStack: 22304 kB' 'PageTables: 9664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439056 kB' 'Committed_AS: 12746856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218988 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.587 10:10:45 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.587 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.588 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:20.589 10:10:45 setup.sh.hugepages -- 
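The long run of '[[ <field> == Hugepagesize ]] ... continue' entries above is get_meminfo scanning /proc/meminfo field by field until it reaches Hugepagesize and echoes 2048, which then becomes default_hugepages for the rest of hugepages.sh. A compact sketch of that lookup, with an illustrative function name and simpler per-node handling than the mapfile-based helper in setup/common.sh:

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: split each meminfo line on ': ' and print
# the value of the requested field (a kB figure, or a page count for HugePages_*).
meminfo_value() {
  local want=$1 node=${2:-} mem_f=/proc/meminfo line field value rest
  # Per-node counters live in sysfs and prefix every line with "Node <n> ".
  [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
  while IFS= read -r line; do
    line=${line#Node "$node" }                 # drop the per-node prefix if present
    IFS=': ' read -r field value rest <<<"$line"
    if [[ $field == "$want" ]]; then
      echo "$value"
      return 0
    fi
  done < "$mem_f"
  return 1
}
meminfo_value Hugepagesize      # prints 2048 on this test machine
meminfo_value HugePages_Free 0  # per-node variant, as the tests use later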
setup/hugepages.sh@37 -- # local node hp 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:20.589 10:10:45 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:20.589 10:10:45 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.589 10:10:45 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.589 10:10:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.849 ************************************ 00:03:20.849 START TEST default_setup 00:03:20.849 ************************************ 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- 
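clear_hp above zeroes every hugepage pool on both NUMA nodes and exports CLEAR_HUGE=yes; default_setup then requests 2097152 kB on node 0, which at the 2048 kB default page size works out to the 1024 pages seen in nr_hugepages. A rough sketch of those two steps written directly against the standard sysfs paths; the real test delegates the allocation to scripts/setup.sh, and the dry-run guard here is illustrative only:

#!/usr/bin/env bash
# Sketch of clear_hp plus the per-node reservation that default_setup asks for.
shopt -s nullglob
: "${DRY_RUN:=1}"
write_sysfs() {                       # hypothetical helper: print or perform the write
  if (( DRY_RUN )); then echo "would write $1 > $2"; else echo "$1" > "$2"; fi
}

# 1. Clear every hugepage pool on every NUMA node (needs root when DRY_RUN=0).
for node in /sys/devices/system/node/node[0-9]*; do
  for pool in "$node"/hugepages/hugepages-*; do
    write_sysfs 0 "$pool/nr_hugepages"
  done
done
export CLEAR_HUGE=yes

# 2. Reserve the test's pages on one node: size in kB over the default page size.
size_kb=2097152
hugepage_kb=2048
target_node=0
nr_hugepages=$(( size_kb / hugepage_kb ))    # 1024 for this run
write_sysfs "$nr_hugepages" \
  "/sys/devices/system/node/node${target_node}/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages"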
setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.849 10:10:45 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:25.039 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:25.039 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:26.962 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39443748 kB' 'MemAvailable: 43245812 kB' 'Buffers: 11368 kB' 'Cached: 14655576 kB' 'SwapCached: 0 kB' 'Active: 11696252 kB' 'Inactive: 3531684 kB' 'Active(anon): 11283716 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564380 kB' 'Mapped: 210148 kB' 'Shmem: 10722724 kB' 'KReclaimable: 505552 kB' 'Slab: 1155108 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649556 kB' 'KernelStack: 22176 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12726792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 
10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.962 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39444080 kB' 'MemAvailable: 43246144 kB' 'Buffers: 11368 kB' 'Cached: 14655576 kB' 'SwapCached: 0 kB' 'Active: 11696172 kB' 'Inactive: 3531684 kB' 'Active(anon): 11283636 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564180 kB' 'Mapped: 210088 kB' 'Shmem: 10722724 kB' 'KReclaimable: 505552 kB' 'Slab: 1155108 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649556 kB' 'KernelStack: 22192 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12726808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 
10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.963 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 
10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 
10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- 
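The scan above ends at setup/common.sh@33 with "echo 0" / "return 0", which setup/hugepages.sh@99 records as surp=0: every /proc/meminfo key is walked with IFS=': ' until HugePages_Surp matches, and its value is printed. A minimal Bash sketch of that parsing pattern, assuming a simplified helper (the real get_meminfo in setup/common.sh additionally handles per-node files and reads via mapfile):

    # hedged sketch, not the actual setup/common.sh implementation:
    # split each "Key: value kB" line of /proc/meminfo on ': '
    # and print the value of the requested key (0 if the key is absent)
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
    }
    # surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 in this run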
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39442788 kB' 'MemAvailable: 43244852 kB' 'Buffers: 11368 kB' 'Cached: 14655596 kB' 'SwapCached: 0 kB' 'Active: 11696896 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284360 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564820 kB' 'Mapped: 210088 kB' 'Shmem: 10722744 kB' 'KReclaimable: 505552 kB' 'Slab: 1155236 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649684 kB' 'KernelStack: 22528 kB' 'PageTables: 9800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12726832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.964 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 
10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 
10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.965 nr_hugepages=1024 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.965 resv_hugepages=0 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.965 surplus_hugepages=0 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.965 anon_hugepages=0 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
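At setup/hugepages.sh@107-109 the trace confirms that the pool reported by the kernel matches what was requested: the echoes just above give nr_hugepages=1024 with resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and HugePages_Total is then re-read to close the loop. A minimal sketch of that accounting check, assuming the values echoed in this run:

    # hedged sketch of the consistency check seen at setup/hugepages.sh@107-109
    nr_hugepages=1024   # requested default pool size
    surp=0              # HugePages_Surp read earlier in the trace
    resv=0              # HugePages_Rsvd read earlier in the trace
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 here

    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) &&
        echo "hugepage pool consistent: ${total} pages x 2048 kB"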
00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39444252 kB' 'MemAvailable: 43246316 kB' 'Buffers: 11368 kB' 'Cached: 14655596 kB' 'SwapCached: 0 kB' 'Active: 11696328 kB' 'Inactive: 3531684 kB' 'Active(anon): 11283792 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564468 kB' 'Mapped: 210000 kB' 'Shmem: 10722744 kB' 'KReclaimable: 505552 kB' 'Slab: 1155204 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649652 kB' 'KernelStack: 22336 kB' 'PageTables: 9832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12726852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.965 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 
10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.966 10:10:51 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:26.966 [... setup/common.sh@31-32 trace repeats for each remaining /proc/meminfo key (Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted); none match HugePages_Total ...]
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
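The trace above shows how setup/common.sh resolves a single meminfo field: it prefers the per-node file when a node id is given, strips the "Node <N> " prefix, then splits each row on ': ' and returns the value for the requested key. A minimal standalone sketch of that lookup pattern follows; get_meminfo_sketch is a hypothetical name used here for illustration, the real helper is get_meminfo in setup/common.sh.

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # Prefer the per-node view when a node id is given and the file exists,
    # just as the trace above switches mem_f to node0's meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        # Per-node files prefix every row with "Node <N> "; drop that prefix
        # so the key names line up with the plain /proc/meminfo format.
        [[ $line == Node\ * ]] && line=${line#Node * }
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total    -> 1024 (matches the echo above)
#      get_meminfo_sketch HugePages_Surp 0   -> 0    (node0 value read next)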
setup/common.sh@31 -- # IFS=': '
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:26.966 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21055300 kB' 'MemUsed: 11583840 kB' 'SwapCached: 0 kB' 'Active: 7305916 kB' 'Inactive: 175472 kB' 'Active(anon): 7100836 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7085532 kB' 'Mapped: 153848 kB' 'AnonPages: 399076 kB' 'Shmem: 6704980 kB' 'KernelStack: 12552 kB' 'PageTables: 6780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157016 kB' 'Slab: 473784 kB' 'SReclaimable: 157016 kB' 'SUnreclaim: 316768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:26.966 [... setup/common.sh@31-32 trace repeats for each node0 meminfo key (MemTotal through HugePages_Free); none match HugePages_Surp ...]
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:26.967 node0=1024 expecting 1024
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:26.967
00:03:26.967 real 0m6.187s
00:03:26.967 user 0m1.596s
00:03:26.967 sys 0m2.789s
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.967 10:10:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:26.967 ************************************
00:03:26.967 END TEST default_setup
00:03:26.967 ************************************
00:03:26.967 10:10:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:26.967 10:10:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:26.967 10:10:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
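default_setup passes here: node 0 reports HugePages_Total: 1024 against an expectation of 1024, with no surplus pages. A hedged, standalone sketch of that per-node comparison follows; it reads the per-node hugepage counters from sysfs instead of the meminfo parse the suite uses, the 2048kB directory name comes from the 'Hugepagesize: 2048 kB' lines in the dumps, and the expect array is purely illustrative.

declare -A expect=( [0]=1024 )   # default_setup expects the whole pool on node 0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    actual=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node${node}=${actual} expecting ${expect[$node]:-0}"
    # Mirror the suite's final check: the observed count must equal the expectation.
    [[ $actual -eq ${expect[$node]:-0} ]] || echo "node${node}: unexpected hugepage count" >&2
done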
00:03:26.967 10:10:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.967 10:10:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.967 ************************************
00:03:26.967 START TEST per_node_1G_alloc
00:03:26.967 ************************************
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.967 10:10:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
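At this point the test asks scripts/setup.sh (via NRHUGE=512 and HUGENODE=0,1) to spread a 1024-page pool evenly across both NUMA nodes. As a rough illustration only, setup.sh itself drives device binding and the real allocation, the equivalent raw sysfs writes for the 2 MB page size reported above would look like this, run as root:

NRHUGE=512                         # pages requested per node, as traced above
for node in 0 1; do                # HUGENODE=0,1
    # Per-node allocation goes through the node's own hugepage counter;
    # 2048kB matches the 'Hugepagesize: 2048 kB' reported by this system.
    echo "$NRHUGE" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
done
# The suite then re-reads /proc/meminfo and expects HugePages_Total: 1024
# (512 + 512), which is what verify_nr_hugepages checks in the output below.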
00:03:31.162 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:31.162 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.162 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.163 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39430624 kB' 'MemAvailable: 43232688 kB' 'Buffers: 11368 kB' 'Cached: 14655736 kB' 'SwapCached: 0 kB' 'Active: 11694808 kB' 'Inactive: 3531684 kB' 'Active(anon): 11282272 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562180 kB' 'Mapped: 209060 kB' 'Shmem: 10722884 kB' 'KReclaimable: 505552 kB' 'Slab: 1156120 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650568 kB' 'KernelStack: 22048 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12715112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
00:03:31.163 [... setup/common.sh@31-32 trace repeats for each /proc/meminfo key (MemTotal through HardwareCorrupted); none match AnonHugePages ...]
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39431368 kB' 'MemAvailable: 43233432 kB' 'Buffers: 11368 kB' 'Cached: 14655736 kB' 'SwapCached: 0 kB' 'Active: 11693776 kB' 'Inactive: 3531684 kB' 'Active(anon): 11281240 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561660 kB' 'Mapped: 208872 kB' 'Shmem: 10722884 kB' 'KReclaimable: 505552 kB' 'Slab: 1156096 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650544 kB' 'KernelStack: 22032 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12715128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M:
24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.164 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 
10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.165 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39431368 kB' 'MemAvailable: 43233432 kB' 'Buffers: 11368 kB' 'Cached: 14655736 kB' 'SwapCached: 0 kB' 'Active: 11693776 kB' 'Inactive: 3531684 kB' 'Active(anon): 11281240 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561660 kB' 'Mapped: 208872 kB' 'Shmem: 10722884 kB' 'KReclaimable: 505552 kB' 'Slab: 1156096 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650544 kB' 'KernelStack: 22032 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12715152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.166 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.167 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:31.168 nr_hugepages=1024 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.168 resv_hugepages=0 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.168 surplus_hugepages=0 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.168 anon_hugepages=0 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39431740 kB' 'MemAvailable: 43233804 kB' 'Buffers: 11368 kB' 'Cached: 14655780 kB' 'SwapCached: 0 kB' 'Active: 11693820 kB' 'Inactive: 3531684 kB' 'Active(anon): 11281284 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561668 kB' 'Mapped: 208872 kB' 'Shmem: 10722928 kB' 
'KReclaimable: 505552 kB' 'Slab: 1156096 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650544 kB' 'KernelStack: 22032 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12715176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.168 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _
00:03:31.169 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 reads each remaining meminfo field (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) and continues past every one that is not HugePages_Total]
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/hugepages.sh@27-33 walks /sys/devices/system/node/node+([0-9]), records nodes_sys[0]=512 and nodes_sys[1]=512, and sets no_nodes=2]
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
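An aside for readers following the trace: get_meminfo, whose expansion is condensed below, boils down to choosing /proc/meminfo or the per-node /sys/devices/system/node/node<N>/meminfo file, stripping the "Node <N>" prefix, and printing the value of one key. A minimal stand-alone sketch of that pattern (the function name get_meminfo_field and its parsing are illustrative assumptions, not SPDK's own helper):

# Sketch only: mirrors the behaviour traced in this test, not the project's helper.
get_meminfo_field() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # Per-node statistics live in sysfs and carry a "Node <N>" prefix on every line.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Strip the optional prefix, then print the value for the requested key.
    sed -E 's/^Node [0-9]+ +//' "$file" | awk -v k="$key:" '$1 == k {print $2}'
}
# Example: surplus huge pages currently reported for NUMA node 0.
get_meminfo_field HugePages_Surp 0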
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Surp, node=0, picks mem_f=/sys/devices/system/node/node0/meminfo, mapfiles it into mem[] and strips the "Node 0" prefix]
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22099160 kB' 'MemUsed: 10539980 kB' 'SwapCached: 0 kB' 'Active: 7303228 kB' 'Inactive: 175472 kB' 'Active(anon): 7098148 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7085536 kB' 'Mapped: 152808 kB' 'AnonPages: 396300 kB' 'Shmem: 6704984 kB' 'KernelStack: 12152 kB' 'PageTables: 5824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157016 kB' 'Slab: 474612 kB' 'SReclaimable: 157016 kB' 'SUnreclaim: 317596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:31.170 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 reads each node0 field above and continues past every one that is not HugePages_Surp]
00:03:31.171 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.171 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.171 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:31.171 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
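Node 0 has just reported zero surplus pages, and the same accounting is about to be repeated for node 1. A hedged sketch of that bookkeeping, with illustrative values (512 per node as in this run, resv taken as 0 since its value is not shown in this excerpt):

# Sketch only: the per-node expectation built up by the loop traced here.
nodes_test=(512 512)   # requested split per NUMA node in this run
resv=0                 # reserved pages counted earlier in the test (0 assumed here)
for node in "${!nodes_test[@]}"; do
    # Surplus pages the kernel reports for this node, as get_meminfo returns above.
    surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
    nodes_test[node]=$(( nodes_test[node] + resv + surp ))
done
declare -p nodes_test   # expected totals to compare against nodes_sys[]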
00:03:31.171 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:31.171 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:31.171 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:31.171 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Surp, node=1, picks mem_f=/sys/devices/system/node/node1/meminfo, mapfiles it into mem[] and strips the "Node 1" prefix]
00:03:31.172 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 17332224 kB' 'MemUsed: 10323848 kB' 'SwapCached: 0 kB' 'Active: 4390656 kB' 'Inactive: 3356212 kB' 'Active(anon): 4183200 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3356212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7581656 kB' 'Mapped: 56064 kB' 'AnonPages: 165364 kB' 'Shmem: 4017988 kB' 'KernelStack: 9880 kB' 'PageTables: 2800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 348536 kB' 'Slab: 681484 kB' 'SReclaimable: 348536 kB' 'SUnreclaim: 332948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:31.172 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 reads each node1 field above and continues past every one that is not HugePages_Surp]
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:31.173 node0=512 expecting 512
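The test has just confirmed node 0; node 1 is reported in the next few entries. To reproduce this check outside the harness, a small hedged sketch against the same sysfs files is enough (the variable names here are illustrative):

# Sketch only: print each node's HugePages_Total next to the expected even split.
expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
done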
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:31.173 node1=512 expecting 512
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:31.173
00:03:31.173 real 0m4.190s
00:03:31.173 user 0m1.604s
00:03:31.173 sys 0m2.666s
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:31.173 10:10:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:31.173 ************************************
00:03:31.173 END TEST per_node_1G_alloc
00:03:31.173 ************************************
00:03:31.173 10:10:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:31.173 10:10:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:31.173 10:10:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:31.173 10:10:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:31.173 10:10:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:31.173 ************************************
00:03:31.173 START TEST even_2G_alloc
00:03:31.173 ************************************
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- [xtrace condensed: setup/hugepages.sh@62-84 initializes user_nodes=(), _nr_hugepages=1024, _no_nodes=2 and splits the pages evenly: nodes_test[1]=512, then nodes_test[0]=512]
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.173 10:10:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:35.378 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:35.378 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
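Before verification starts, note what the allocation step above just requested: scripts/setup.sh was run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, asking for 1024 pages spread evenly across both NUMA nodes. One way to express the same even split directly against the kernel's per-node sysfs interface (a hedged sketch, not necessarily what setup.sh itself executes) is:

# Sketch only: request 512 default-size (2048 kB) huge pages on each NUMA node, then show the result.
pages_per_node=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    echo "$pages_per_node" | sudo tee "$node_dir/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages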
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39426200 kB' 'MemAvailable: 43228264 kB' 'Buffers: 11368 kB' 'Cached: 14655896 kB' 'SwapCached: 0 kB' 'Active: 11694652 kB' 'Inactive: 3531684 kB' 'Active(anon): 11282116 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562252 kB' 'Mapped: 208932 kB' 'Shmem: 10723044 kB' 'KReclaimable: 505552 kB' 'Slab: 1155696 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650144 kB' 'KernelStack: 21984 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12715592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.379 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
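The caller, setup/hugepages.sh in its even_2G_alloc case, uses three such lookups to account for the 1024 hugepages it configured: further down this trace anon, surp and resv all come back 0 (hugepages.sh@97-@100), the script echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and the arithmetic checks at hugepages.sh@107-@109 confirm the pool adds up. A hedged sketch of that bookkeeping, using only /proc/meminfo as input; meminfo_kb, verify_even_2G_alloc and expected are illustrative names standing in for whatever the real script uses (the trace only shows the already-expanded value 1024).

#!/usr/bin/env bash
# Sketch of the hugepage accounting performed by the traced test.

# Tiny stand-in for get_meminfo: print the numeric value of one meminfo key.
meminfo_kb() { awk -v key="$1" -F': +' '$1 == key { print $2 + 0; exit }' /proc/meminfo; }

verify_even_2G_alloc() {
    local expected=1024                              # 1024 x 2048 kB pages = the 2G the test allocates
    local nr_hugepages anon surp resv
    nr_hugepages=$(meminfo_kb HugePages_Total)       # 1024 in the snapshot below
    anon=$(meminfo_kb AnonHugePages)                 # trace: anon=0
    surp=$(meminfo_kb HugePages_Surp)                # trace: surp=0
    resv=$(meminfo_kb HugePages_Rsvd)                # trace: resv=0

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Every requested page must be a plain pre-allocated hugepage: none
    # surplus and none hidden in the reserved pool.
    ((expected == nr_hugepages + surp + resv)) || return 1
    ((expected == nr_hugepages)) || return 1
}

verify_even_2G_alloc

In this run all three counters are 0 and both checks hold, so the trace simply proceeds to another get_meminfo HugePages_Total lookup (hugepages.sh@110), producing one more key-by-key scan like the ones surrounding this note.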
00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39427992 kB' 'MemAvailable: 43230056 kB' 'Buffers: 11368 kB' 'Cached: 14655900 kB' 'SwapCached: 0 kB' 'Active: 11694636 kB' 'Inactive: 3531684 kB' 'Active(anon): 11282100 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562232 kB' 'Mapped: 208884 kB' 'Shmem: 10723048 kB' 'KReclaimable: 505552 kB' 'Slab: 1155656 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650104 kB' 'KernelStack: 21984 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12715744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.380 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.381 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39427992 kB' 'MemAvailable: 43230056 kB' 'Buffers: 11368 kB' 'Cached: 14655900 kB' 'SwapCached: 0 kB' 'Active: 11694636 kB' 'Inactive: 3531684 kB' 'Active(anon): 11282100 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562284 kB' 'Mapped: 208884 kB' 'Shmem: 10723048 kB' 'KReclaimable: 505552 kB' 'Slab: 1155656 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650104 kB' 'KernelStack: 22000 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12715768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.382 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.383 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.384 nr_hugepages=1024 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.384 resv_hugepages=0 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.384 surplus_hugepages=0 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.384 anon_hugepages=0 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39428060 kB' 'MemAvailable: 43230124 kB' 'Buffers: 11368 kB' 'Cached: 14655972 kB' 'SwapCached: 0 kB' 'Active: 11694736 kB' 'Inactive: 3531684 kB' 'Active(anon): 11282200 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562320 kB' 'Mapped: 208884 kB' 'Shmem: 10723120 kB' 'KReclaimable: 505552 kB' 'Slab: 1155656 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650104 kB' 'KernelStack: 22016 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12716160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.384 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.385 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.386 10:10:59 
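After confirming the global pool (1024 == nr_hugepages + surp + resv), the trace enumerates /sys/devices/system/node/node[0-9]* and repeats the lookup against each node's own meminfo. A compact sketch of that per-node check is below; it only reads HugePages_Total per node and leaves out the surplus/reserved accounting the real hugepages.sh performs, so treat it as an illustration of the shape of the loop, not the script itself.

#!/usr/bin/env bash
# Sketch of the per-node verification that follows in the trace: for the
# even_2G_alloc case, every NUMA node is expected to hold half of the
# 1024-page pool.
expected=512

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
    got=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node$node=$got expecting $expected"
    [[ $got == "$expected" ]] || echo "node$node short of hugepages" >&2
done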
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22095144 kB' 'MemUsed: 10543996 kB' 'SwapCached: 0 kB' 'Active: 7303276 kB' 'Inactive: 175472 kB' 'Active(anon): 7098196 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7085540 kB' 'Mapped: 152820 kB' 'AnonPages: 396304 kB' 'Shmem: 6704988 kB' 'KernelStack: 12152 kB' 'PageTables: 5824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157016 kB' 'Slab: 474172 kB' 'SReclaimable: 157016 kB' 'SUnreclaim: 317156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 
10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.386 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:35.387 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.387 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 17332664 kB' 'MemUsed: 10323408 kB' 'SwapCached: 0 kB' 'Active: 4391868 kB' 'Inactive: 3356212 kB' 'Active(anon): 4184412 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3356212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7581824 kB' 'Mapped: 56064 kB' 'AnonPages: 166428 kB' 'Shmem: 4018156 kB' 'KernelStack: 9896 kB' 'PageTables: 2856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 348536 kB' 'Slab: 681484 kB' 'SReclaimable: 348536 kB' 'SUnreclaim: 332948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.388 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 
10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:10:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.389 node0=512 expecting 512 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.389 10:11:00 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:35.389 node1=512 expecting 512 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:35.389 00:03:35.389 real 0m4.080s 00:03:35.389 user 0m1.414s 00:03:35.389 sys 0m2.663s 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.389 10:11:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.389 ************************************ 00:03:35.389 END TEST even_2G_alloc 00:03:35.389 ************************************ 00:03:35.389 10:11:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.389 10:11:00 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:35.389 10:11:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.389 10:11:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.389 10:11:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.389 ************************************ 00:03:35.389 START TEST odd_alloc 00:03:35.389 ************************************ 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:35.389 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- 
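The odd_alloc setup above turns the 2098176 kB request into 1025 pages of 2048 kB and hands 513 of them to node 0 and 512 to node 1 (the two nodes_test assignments at hugepages.sh@82). The sketch below reproduces that arithmetic; the round-up and the choice of which node takes the odd page are inferred from the values in the trace rather than copied from hugepages.sh.

#!/usr/bin/env bash
# Sketch of the odd_alloc split derived above: 2098176 kB of huge memory
# becomes 1025 pages, with the remainder landing on node 0 (513 + 512).
size_kb=2098176           # HUGEMEM=2049 MiB expressed in kB
hugepagesize_kb=2048
no_nodes=2

nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))  # 1025
base=$(( nr_hugepages / no_nodes ))                                    # 512
rem=$(( nr_hugepages % no_nodes ))                                     # 1

for (( node = 0; node < no_nodes; node++ )); do
    pages=$base
    (( node < rem )) && pages=$(( base + 1 ))   # node0 picks up the odd page
    echo "node$node=$pages"
done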
setup/hugepages.sh@83 -- # : 0 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.390 10:11:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:39.623 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:39.623 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- 
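The sizing that the odd_alloc trace above walks through is plain arithmetic: HUGEMEM=2049 MB becomes 2098176 kB, which at the 2048 kB Hugepagesize reported in /proc/meminfo rounds up to 1025 pages, and get_test_nr_hugepages_per_node then books 513 of them against node0 and 512 against node1 (the nodes_test assignments above). The following is a minimal standalone sketch of that arithmetic, not the autotest's setup/hugepages.sh; the rounding and distribution details are assumptions drawn only from the numbers visible in this log.

#!/usr/bin/env bash
# Sketch only: reproduce the 2049 MB -> 1025 pages -> 513/512 split seen in the trace above.
split_hugepages() {
    local hugemem_mb=$1 no_nodes=$2
    local size_kb=$((hugemem_mb * 1024))                            # 2049 MB -> 2098176 kB
    local hugepagesz_kb=2048                                        # Hugepagesize from /proc/meminfo
    local nr=$(( (size_kb + hugepagesz_kb - 1) / hugepagesz_kb ))   # -> 1025 (rounded up; assumption)
    local node base=$((nr / no_nodes)) rem=$((nr % no_nodes))
    for ((node = 0; node < no_nodes; node++)); do
        # the odd leftover page lands on the lower-numbered node: node0=513, node1=512
        echo "node${node}=$(( base + (node < rem ? 1 : 0) ))"
    done
}
split_hugepages 2049 2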
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.623 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39417752 kB' 'MemAvailable: 43219816 kB' 'Buffers: 11368 kB' 'Cached: 14656080 kB' 'SwapCached: 0 kB' 'Active: 11696984 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284448 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564060 kB' 'Mapped: 209008 kB' 'Shmem: 10723228 kB' 'KReclaimable: 505552 kB' 'Slab: 1155448 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649896 kB' 'KernelStack: 22112 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12716924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
... (setup/common.sh@31-32: the IFS=': ' / read -r var val _ / compare / continue cycle repeats for every field from MemTotal through HardwareCorrupted before AnonHugePages matches) ...
00:03:39.624 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
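Each get_meminfo call in this trace snapshots the whole meminfo file and then walks it line by line, skipping field after field until the requested key turns up, which is why AnonHugePages above (and HugePages_Surp just below) come back as 0 on this box. The following is a condensed standalone sketch of that parsing pattern, not the verbatim setup/common.sh; the per-node branch is included only because the [[ -e /sys/devices/system/node/node/meminfo ]] test in the trace implies it.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above; not the verbatim setup/common.sh.
shopt -s extglob    # needed for the "Node <N> " prefix strip used on per-node files

get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # When a node argument is given, the same key/value format is read from the per-node file.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node lines carry a "Node <N> " prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of 'continue' entries in the log
        echo "$val"                        # value only; any trailing "kB" is dropped into _
        return 0
    done
}

get_meminfo_sketch AnonHugePages      # 0 on this box, hence anon=0 above
get_meminfo_sketch HugePages_Total    # 1025 once setup.sh has allocated the pages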
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.625 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39418840 kB' 'MemAvailable: 43220904 kB' 'Buffers: 11368 kB' 'Cached: 14656084 kB' 'SwapCached: 0 kB' 'Active: 11696032 kB' 'Inactive: 3531684 kB' 'Active(anon): 11283496 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563624 kB' 'Mapped: 208924 kB' 'Shmem: 10723232 kB' 'KReclaimable: 505552 kB' 'Slab: 1155432 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649880 kB' 'KernelStack: 22096 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12716940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
... (setup/common.sh@31-32: the IFS=': ' / read -r var val _ / compare / continue cycle repeats for every field from MemTotal through HugePages_Rsvd before HugePages_Surp matches) ...
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
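Once the global figures (anon, surp, and resv from the HugePages_Rsvd read that follows) are collected, verify_nr_hugepages compares per-node counts against the nodes_test bookkeeping, which is what lines like "node0=512 expecting 512" in the even_2G_alloc run above are printing. A standalone sketch of that per-node check is below; the autotest itself pulls the numbers through the same get_meminfo helper with a node argument, whereas this sketch reads the equivalent sysfs counters for 2048 kB pages, and the 513/512 expectations are simply the nodes_test values assigned earlier in this trace.

#!/usr/bin/env bash
# Sketch of the per-node verification; not the autotest's verify_nr_hugepages.
check_nodes() {
    # expected counts taken from the nodes_test assignments traced above (1025 split 513/512)
    local -A expected=([0]=513 [1]=512)
    local node path count rc=0
    for node in 0 1; do
        path=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        count=$(<"$path")
        echo "node${node}=${count} expecting ${expected[$node]}"
        [[ $count -eq ${expected[$node]} ]] || rc=1
    done
    return $rc
}
check_nodes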
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.626 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.627 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:39.627 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:39.627 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39419596 kB' 'MemAvailable: 43221660 kB' 'Buffers: 11368 kB' 'Cached: 14656100 kB' 'SwapCached: 0 kB' 'Active: 11695784 kB' 'Inactive: 3531684 kB' 'Active(anon): 11283248 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563284 kB' 'Mapped: 208924 kB' 'Shmem: 10723248 kB' 'KReclaimable: 505552 kB' 'Slab: 1155432 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649880 kB' 'KernelStack: 22096 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12716960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
... (setup/common.sh@31-32: the IFS=': ' / read -r var val _ / compare / continue cycle repeats again for each field from MemTotal through ShmemHugePages) ...
00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.628 10:11:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:39.628 nr_hugepages=1025 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:39.628 resv_hugepages=0 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:39.628 surplus_hugepages=0 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:39.628 anon_hugepages=0 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.628 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39419596 kB' 'MemAvailable: 43221660 kB' 'Buffers: 11368 kB' 'Cached: 14656100 kB' 'SwapCached: 0 kB' 'Active: 11695820 kB' 'Inactive: 3531684 kB' 'Active(anon): 11283284 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563320 kB' 'Mapped: 208924 kB' 'Shmem: 10723248 kB' 'KReclaimable: 505552 kB' 'Slab: 1155432 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649880 kB' 'KernelStack: 22112 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 12716984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.629 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22082500 kB' 'MemUsed: 10556640 kB' 'SwapCached: 0 kB' 'Active: 7305512 kB' 'Inactive: 175472 kB' 'Active(anon): 7100432 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7085572 kB' 'Mapped: 152836 kB' 'AnonPages: 398560 kB' 'Shmem: 6705020 kB' 'KernelStack: 12232 kB' 'PageTables: 6068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157016 kB' 'Slab: 474056 kB' 'SReclaimable: 157016 kB' 'SUnreclaim: 317040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.630 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.631 10:11:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.631 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 
kB' 'MemFree: 17337032 kB' 'MemUsed: 10319040 kB' 'SwapCached: 0 kB' 'Active: 4390268 kB' 'Inactive: 3356212 kB' 'Active(anon): 4182812 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3356212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7581972 kB' 'Mapped: 56088 kB' 'AnonPages: 164648 kB' 'Shmem: 4018304 kB' 'KernelStack: 9848 kB' 'PageTables: 2696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 348536 kB' 'Slab: 681376 kB' 'SReclaimable: 348536 kB' 'SUnreclaim: 332840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.632 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:39.633 node0=512 expecting 513 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:39.633 node1=513 expecting 512 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:39.633 00:03:39.633 real 0m4.279s 00:03:39.633 user 0m1.563s 00:03:39.633 sys 0m2.798s 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.633 10:11:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:39.633 ************************************ 00:03:39.633 END TEST odd_alloc 00:03:39.633 ************************************ 00:03:39.893 10:11:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:39.893 10:11:04 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:39.893 10:11:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.893 10:11:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.893 10:11:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.893 ************************************ 00:03:39.893 START TEST custom_alloc 00:03:39.893 ************************************ 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:39.893 10:11:04 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@78 -- # return 0 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.893 10:11:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:44.091 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.091 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38367812 kB' 'MemAvailable: 42169876 kB' 'Buffers: 11368 kB' 'Cached: 14656256 kB' 'SwapCached: 0 kB' 'Active: 11697084 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284548 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564332 kB' 'Mapped: 208980 kB' 'Shmem: 10723404 kB' 'KReclaimable: 505552 kB' 'Slab: 1155456 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649904 kB' 'KernelStack: 22176 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12719252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219004 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
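The /proc/meminfo snapshot printed just above already reports 'HugePages_Total: 1536', 'Hugepagesize: 2048 kB' and 'Hugetlb: 3145728 kB', i.e. the 512 + 1024 pages that the custom_alloc prologue derived from the two requested sizes (1048576 and 2097152, apparently in kB) and handed to setup.sh as HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'. A quick standalone check of that arithmetic (an illustration only, not part of the test scripts):

    default_hugepages=2048                                  # kB, matching 'Hugepagesize: 2048 kB' above
    for size_kb in 1048576 2097152; do                      # the two sizes requested by custom_alloc
        echo "$size_kb kB -> $((size_kb / default_hugepages)) x 2 MiB hugepages"
    done                                                    # prints 512 and 1024
    echo "total: $(((1048576 + 2097152) / default_hugepages)) pages"   # 1536, matching HugePages_Total
                                                            # and 1536 * 2048 kB = 3145728 kB = Hugetlb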
00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.091 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
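The long run of '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue' entries here (and the \H\u\g\e\P\a\g\e\s\_\S\u\r\p run earlier) is setup/common.sh's get_meminfo scanning the captured meminfo snapshot one field at a time until it reaches the requested key; the 'echo 0' that closes each scan is the matched value itself (AnonHugePages and HugePages_Surp are both 0 on this node). A condensed standalone sketch of that lookup pattern, with a simplified name and argument handling rather than the exact helper:

    #!/usr/bin/env bash
    # Rough approximation of the get_meminfo pattern traced above (illustrative only).
    shopt -s extglob
    meminfo_lookup() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node lookups read the node-local meminfo instead of the global one
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # per-node files prefix every line with "Node N "
        local var val _
        while IFS=': ' read -r var val _; do    # e.g. var=HugePages_Surp val=0
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        echo 0                                  # fallback when the field is absent
    }
    # e.g.: meminfo_lookup HugePages_Surp    or    meminfo_lookup HugePages_Free 0

Values come back as a bare page count for the HugePages_* fields and in kB for most other fields, which is what the surrounding hugepages.sh checks (anon=0, surp=0, and the per-node node0/node1 expectations) compare against.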
00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.092 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38368748 kB' 'MemAvailable: 42170812 kB' 'Buffers: 11368 kB' 'Cached: 14656256 kB' 'SwapCached: 0 kB' 'Active: 11697356 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284820 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564568 kB' 'Mapped: 208956 kB' 'Shmem: 10723404 kB' 'KReclaimable: 505552 kB' 'Slab: 1155436 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649884 kB' 'KernelStack: 22176 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12720884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218940 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:44.093 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:44.094 10:11:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38367432 kB' 'MemAvailable: 42169496 kB' 'Buffers: 11368 kB' 'Cached: 14656276 kB' 'SwapCached: 0 kB' 'Active: 11697784 kB' 'Inactive: 3531684 kB' 'Active(anon): 11285248 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565028 kB' 'Mapped: 208948 kB' 'Shmem: 10723424 kB' 'KReclaimable: 505552 kB' 'Slab: 1155604 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 650052 kB' 'KernelStack: 22304 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12720908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219036 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.094 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.095 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:44.096 nr_hugepages=1536 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.096 resv_hugepages=0 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.096 surplus_hugepages=0 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.096 anon_hugepages=0 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
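The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' / 'continue' pairs above are bash xtrace output from the get_meminfo helper in setup/common.sh: the whole meminfo file is read into an array and walked one 'key: value' pair at a time until the requested key matches, so every non-matching field shows up as one test-and-continue pair in the log. Below is a minimal sketch of that loop, reconstructed from this trace rather than copied from the SPDK source (the real helper has extra branches, e.g. the check at common.sh@25, that are omitted here; variable quoting may also differ).

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern used below

  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem

      # With no node argument, /sys/devices/system/node/node/meminfo does not
      # exist, so the system-wide /proc/meminfo is used (common.sh@22-@24).
      mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; strip it so both
      # files parse as plain "key: value" pairs (common.sh@29).
      mem=("${mem[@]#Node +([0-9]) }")

      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # each miss is one test/continue pair in the trace
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total     # prints 1536 on the machine traced here
  get_meminfo HugePages_Surp 0    # same scan, but against node0's meminfo file

With HugePages_Surp and HugePages_Rsvd both returning 0, hugepages.sh records surp=0 and resv=0 before re-reading HugePages_Total in the trace that continues below.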
00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 38378604 kB' 'MemAvailable: 42180668 kB' 'Buffers: 11368 kB' 'Cached: 14656296 kB' 'SwapCached: 0 kB' 'Active: 11697384 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284848 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564588 kB' 'Mapped: 208956 kB' 'Shmem: 10723444 kB' 'KReclaimable: 505552 kB' 'Slab: 1155188 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649636 kB' 'KernelStack: 22304 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 12720556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219020 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.096 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.097 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 22087524 kB' 'MemUsed: 10551616 kB' 'SwapCached: 0 kB' 'Active: 7305360 kB' 'Inactive: 175472 kB' 'Active(anon): 7100280 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7085604 kB' 'Mapped: 152852 kB' 'AnonPages: 398356 kB' 'Shmem: 6705052 kB' 'KernelStack: 12536 kB' 'PageTables: 7008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157016 kB' 'Slab: 473620 kB' 'SReclaimable: 157016 kB' 'SUnreclaim: 316604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.098 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.098 
10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]   (the @32 check, @32 continue, @31 IFS=': ', @31 read -r var val _ sequence repeats here for each remaining node0 meminfo field: Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free)
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.099 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656072 kB' 'MemFree: 16290964 kB' 'MemUsed: 11365108 kB' 'SwapCached: 0 kB' 'Active: 4391984 kB' 'Inactive: 3356212 kB' 'Active(anon): 4184528 kB' 'Inactive(anon): 0 kB' 'Active(file): 207456 kB' 'Inactive(file): 3356212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7582104 kB' 'Mapped: 56104 kB' 'AnonPages: 166124 kB' 'Shmem: 4018436 kB' 'KernelStack: 9800 kB' 'PageTables: 2512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 348536 kB' 'Slab: 681568 kB' 'SReclaimable: 348536 kB' 'SUnreclaim: 333032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:44.101 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]   (the @32 check, @32 continue, @31 IFS=': ', @31 read -r var val _ sequence repeats here for each node1 meminfo field: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free)
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:44.102 node0=512 expecting 512
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:44.102 node1=1024 expecting 1024
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:44.102
00:03:44.102 real 0m4.223s
00:03:44.102 user 0m1.577s
00:03:44.102 sys 0m2.698s
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:44.102 10:11:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:44.102 ************************************
00:03:44.103 END TEST custom_alloc
00:03:44.103 ************************************
00:03:44.103 10:11:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
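The custom_alloc pass above repeatedly walks /proc/meminfo, or the per-node copy under /sys/devices/system/node, looking for a single key such as HugePages_Surp before comparing the per-node counts against the expected 512/1024 split. A minimal, self-contained sketch of that lookup pattern follows; get_meminfo_sketch and its local names are illustrative stand-ins inferred from the trace, not the actual setup/common.sh implementation.

    #!/usr/bin/env bash
    # Sketch (assumption: bash 4+ for mapfile) of the per-node meminfo lookup pattern
    # visible in the trace: prefer /sys/devices/system/node/node$N/meminfo when a node
    # is given, strip the "Node N " prefix, then scan for the requested key.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; drop it so both
        # file formats parse the same way.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. "0" for HugePages_Surp, as in the log above
                return 0
            fi
        done
        return 1
    }

    # Example: surplus huge pages on NUMA node 1 (the trace prints 0 here)
    get_meminfo_sketch HugePages_Surp 1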
00:03:44.103 10:11:08 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:44.103 10:11:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:44.103 10:11:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:44.103 10:11:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:44.103 ************************************
00:03:44.103 START TEST no_shrink_alloc
00:03:44.103 ************************************
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:44.103 10:11:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:48.297 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:48.297 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
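Before verify_nr_hugepages runs, the trace above shows how the test derives its page budget: get_test_nr_hugepages is handed 2097152 plus an explicit node list ('0'), which at the 2048 kB Hugepagesize reported later in the log works out to nr_hugepages=1024, all of it booked against node 0 in nodes_test. The small sketch below mirrors that bookkeeping; the function name plan_test_hugepages and the kB unit interpretation are assumptions inferred from the numbers in the trace, not the real setup/hugepages.sh API.

    #!/usr/bin/env bash
    # Sketch (assumptions: size given in kB, 2048 kB huge pages) of the per-node
    # bookkeeping traced above: an explicit node list pins the whole request to
    # the listed node(s).
    declare -A nodes_test=()

    plan_test_hugepages() {
        local size_kb=$1; shift
        local -a node_ids=("$@")     # e.g. ("0"), as in "get_test_nr_hugepages 2097152 0"
        local hugepage_kb=2048       # matches 'Hugepagesize: 2048 kB' in the log
        local nr_hugepages=$((size_kb / hugepage_kb))
        local node

        if ((${#node_ids[@]} > 0)); then
            # Explicit nodes: each listed node gets the full count, which is what
            # the trace shows for node_ids=('0').
            for node in "${node_ids[@]}"; do
                nodes_test[$node]=$nr_hugepages
            done
        else
            # No node list in the traced code path; defaulting to node 0 is an assumption.
            nodes_test[0]=$nr_hugepages
        fi
    }

    plan_test_hugepages 2097152 0
    declare -p nodes_test    # expected here: nodes_test=([0]="1024")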
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.297 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.298 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39420472 kB' 'MemAvailable: 43222536 kB' 'Buffers: 11368 kB' 'Cached: 14656424 kB' 'SwapCached: 0 kB' 'Active: 11696856 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284320 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564108 kB' 'Mapped: 209024 kB' 'Shmem: 10723572 kB' 'KReclaimable: 505552 kB' 'Slab: 1155472 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649920 kB' 'KernelStack: 22064 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12735128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
00:03:48.298 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]   (the @32 check, @32 continue, @31 IFS=': ', @31 read -r var val _ sequence repeats here for each /proc/meminfo field ahead of AnonHugePages: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted)
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
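The anon=0 result above comes out of the same meminfo scan, gated by the transparent-hugepage check at hugepages.sh@96 ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]). A hedged sketch of that probe follows; anon_hugepages_kb is an illustrative name, and the awk lookup is a simplification of the field-by-field loop the trace shows.

    #!/usr/bin/env bash
    # Sketch of the anonymous-hugepage probe seen in verify_nr_hugepages: skip the
    # accounting when THP is globally disabled ("[never]" selected), otherwise read
    # the AnonHugePages value from /proc/meminfo.
    anon_hugepages_kb() {
        local thp_file=/sys/kernel/mm/transparent_hugepage/enabled
        local thp anon

        thp=$(< "$thp_file")      # e.g. "always [madvise] never", as in the log above
        if [[ $thp == *"[never]"* ]]; then
            echo 0                # THP off: nothing to account for
            return 0
        fi

        # Equivalent of scanning /proc/meminfo for the "AnonHugePages:" line
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "${anon:-0}"
    }

    anon_hugepages_kb    # prints 0 on this runner, matching anon=0 in the trace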
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- #
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39420788 kB' 'MemAvailable: 43222852 kB' 'Buffers: 11368 kB' 'Cached: 14656428 kB' 'SwapCached: 0 kB' 'Active: 11696776 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284240 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564012 kB' 'Mapped: 208932 kB' 'Shmem: 10723576 kB' 'KReclaimable: 505552 kB' 'Slab: 1155440 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649888 kB' 'KernelStack: 22048 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12718960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.299 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: IFS=': '; read -r var val _; one "[[ <field> == HugePages_Surp ]]" / "continue" pair for each remaining non-matching /proc/meminfo field: Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd ...]
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
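The records above are get_meminfo() in setup/common.sh resolving HugePages_Surp for the whole system: it snapshots /proc/meminfo, walks it field by field (each non-matching field shows up as one "continue" record), and echoes the value of the requested field, which hugepages.sh stores as surp=0. A rough bash sketch of that lookup, reconstructed only from this trace (the name get_meminfo_sketch and the exact body are illustrative assumptions, not the SPDK source):

    # get_meminfo_sketch NAME [NODE] - print the value of one meminfo field, read from
    # /proc/meminfo or, when NODE is given, /sys/devices/system/node/node<NODE>/meminfo.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        for line in "${mem[@]}"; do
            line=${line#Node $node }             # per-node files prefix every line with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue     # the per-field "continue" records seen above
            echo "$val"                          # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
            return 0
        done
        return 1
    }

Calling get_meminfo_sketch HugePages_Surp on this box would print 0, and get_meminfo_sketch HugePages_Free 0 would print node 0's free hugepage count.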
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.301 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39421100 kB' 'MemAvailable: 43223164 kB' 'Buffers: 11368 kB' 'Cached: 14656448 kB' 'SwapCached: 0 kB' 'Active: 11696824 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284288 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564012 kB' 'Mapped: 208932 kB' 'Shmem: 10723596 kB' 'KReclaimable: 505552 kB' 'Slab: 1155440 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649888 kB' 'KernelStack: 22048 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12718984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32: one "[[ <field> == HugePages_Rsvd ]]" / "continue" pair for each field before HugePages_Rsvd: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free ...]
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:48.303 nr_hugepages=1024
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:48.303 resv_hugepages=0
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:48.303 surplus_hugepages=0
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:48.303 anon_hugepages=0
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
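At this point the script has surp=0 and resv=0, echoes the summary above, and asserts that the kernel's global counters add up against the configured nr_hugepages of 1024. A self-contained sketch of the same consistency check (the awk extraction is an illustrative shortcut, not the script's own helper):

    # Re-derive the numbers echoed above from /proc/meminfo and check that
    # HugePages_Total == nr_hugepages + surplus + reserved.
    nr_hugepages=1024                                              # value configured by this test
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)      # 0 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2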
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.303 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39421368 kB' 'MemAvailable: 43223432 kB' 'Buffers: 11368 kB' 'Cached: 14656468 kB' 'SwapCached: 0 kB' 'Active: 11696824 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284288 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564012 kB' 'Mapped: 208932 kB' 'Shmem: 10723616 kB' 'KReclaimable: 505552 kB' 'Slab: 1155440 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649888 kB' 'KernelStack: 22048 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12719004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32: one "[[ <field> == HugePages_Total ]]" / "continue" pair for each field before HugePages_Total: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted ...]
00:03:48.304 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.304 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:48.304 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.304 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:48.304 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21034252 kB' 'MemUsed: 11604888 kB' 'SwapCached: 0 kB' 'Active: 7306140 kB' 'Inactive: 175472 kB' 'Active(anon): 7101060 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7085652 kB' 'Mapped: 152868 kB' 'AnonPages: 399188 kB' 'Shmem: 6705100 kB' 'KernelStack: 12184 kB' 'PageTables: 5828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157016 kB' 'Slab: 473752 kB' 'SReclaimable: 157016 kB' 'SUnreclaim: 316736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.305 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.306 node0=1024 expecting 1024 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.306 10:11:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:52.507 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.507 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.507 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39426392 kB' 'MemAvailable: 43228456 kB' 'Buffers: 11368 kB' 'Cached: 14656568 kB' 'SwapCached: 0 kB' 'Active: 11698348 kB' 'Inactive: 3531684 kB' 'Active(anon): 11285812 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564912 kB' 'Mapped: 209100 kB' 'Shmem: 10723716 kB' 'KReclaimable: 505552 kB' 'Slab: 1155140 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649588 kB' 'KernelStack: 22048 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12719436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.507 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 
10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.508 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39427004 kB' 'MemAvailable: 43229068 kB' 'Buffers: 11368 kB' 'Cached: 14656572 kB' 'SwapCached: 0 kB' 'Active: 11697520 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284984 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564580 kB' 'Mapped: 208948 kB' 'Shmem: 10723720 kB' 'KReclaimable: 505552 kB' 'Slab: 1155120 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649568 kB' 'KernelStack: 22032 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12719456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.509 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.509 
10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [xtrace condensed: the remaining /proc/meminfo fields (Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) are each read and skipped with "continue" until HugePages_Surp is reached]
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.510 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39427004 kB' 'MemAvailable: 43229068 kB' 'Buffers: 11368 kB' 'Cached: 14656572 kB' 'SwapCached: 0 kB' 'Active: 11697520 kB' 'Inactive: 3531684 kB' 'Active(anon): 11284984 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564580 kB' 'Mapped: 208948 kB' 'Shmem: 10723720 kB' 'KReclaimable: 505552 kB' 'Slab: 1155120 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649568 kB' 'KernelStack: 22032 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12719476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
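[Editor's note] The xtrace above is bash tracing a meminfo field lookup: read the file, strip any "Node N " prefix, and scan field by field until the requested key matches. A minimal sketch of that pattern, reusing the names visible in the trace (get, node, mem_f, mem) but not claiming to be the literal setup/common.sh:

  #!/usr/bin/env bash
  # Sketch (assumption): simplified re-creation of the traced lookup, not the real SPDK helper.
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local var val _ mem_f mem
      mem_f=/proc/meminfo
      # When a node number is given, prefer the per-node sysfs meminfo file.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          # Print the value (kB or page count) of the requested field and stop.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp    -> 0    (as in the trace above)
  #      get_meminfo_sketch HugePages_Total 0 -> 1024 (node0 value, per-node file)

The per-field "continue" lines in the log are simply this while/read loop skipping every field that is not the one asked for.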
00:03:52.511 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [xtrace condensed: the scan walks the snapshot above field by field (MemFree, MemAvailable, Buffers, Cached, SwapCached, Active/Inactive counters, swap and zswap counters, Dirty, Writeback, AnonPages, Mapped, Shmem, slab and kernel-stack counters, vmalloc and CMA counters, HugePages_Total, HugePages_Free), each skipped with "continue", until HugePages_Rsvd is reached]
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
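[Editor's note] At this point the run has established surp=0 and resv=0 and printed nr_hugepages=1024 / resv_hugepages=0. The consistency check it performs next (setup/hugepages.sh@107-110 in the trace) amounts to the following; a hedged restatement using the helper sketched earlier, with nr_hugepages=1024 taken from this run, not the literal script:

  # Sketch (assumption): condensed restatement of the hugepage accounting check.
  nr_hugepages=1024                                   # expected pool size for this run
  surp=$(get_meminfo_sketch HugePages_Surp)           # 0 in this log
  resv=$(get_meminfo_sketch HugePages_Rsvd)           # 0 in this log
  total=$(get_meminfo_sketch HugePages_Total)         # 1024 in this log
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  # The pool is consistent when the kernel's total equals the requested pages
  # plus any surplus and reserved pages.
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2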
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.512 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.513 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295212 kB' 'MemFree: 39426500 kB' 'MemAvailable: 43228564 kB' 'Buffers: 11368 kB' 'Cached: 14656612 kB' 'SwapCached: 0 kB' 'Active: 11697572 kB' 'Inactive: 3531684 kB' 'Active(anon): 11285036 kB' 'Inactive(anon): 0 kB' 'Active(file): 412536 kB' 'Inactive(file): 3531684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564576 kB' 'Mapped: 208948 kB' 'Shmem: 10723760 kB' 'KReclaimable: 505552 kB' 'Slab: 1155120 kB' 'SReclaimable: 505552 kB' 'SUnreclaim: 649568 kB' 'KernelStack: 22032 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 12719500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 98112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3521908 kB' 'DirectMap2M: 24475648 kB' 'DirectMap1G: 40894464 kB'
00:03:52.513 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [xtrace condensed: every field of the snapshot above is read and skipped with "continue" until HugePages_Total is reached]
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 21050220 kB' 'MemUsed: 11588920 kB' 'SwapCached: 0 kB' 'Active: 7307000 kB' 'Inactive: 175472 kB' 'Active(anon): 7101920 kB' 'Inactive(anon): 0 kB' 'Active(file): 205080 kB' 'Inactive(file): 175472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7085680 kB' 'Mapped: 153144 kB' 'AnonPages: 400040 kB' 'Shmem: 6705128 kB' 'KernelStack: 12152 kB' 'PageTables: 5876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157016 kB' 'Slab: 473624 kB' 'SReclaimable: 157016 kB' 'SUnreclaim: 316608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
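[Editor's note] The get_nodes / per-node loop traced above enumerates /sys/devices/system/node/node<N> and then re-queries HugePages_Surp from each node's own meminfo file. A sketch of that walk, keeping the array names from the trace (nodes_sys, nodes_test) and the helper sketched earlier; reading the 2048 kB pool size from per-node sysfs is an assumption, since the actual source of the 1024/0 values is not visible in this trace:

  # Sketch (assumption): simplified re-creation of the per-NUMA-node walk.
  shopt -s extglob nullglob
  declare -a nodes_sys nodes_test
  : "${resv:=0}"                      # resv was determined above (0 in this run)
  for node in /sys/devices/system/node/node+([0-9]); do
      # node0 -> 1024 pages, node1 -> 0 pages on this machine, per the trace.
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}           # 2 on this system
  for node in "${!nodes_sys[@]}"; do  # the trace iterates nodes_test's indices here
      (( nodes_test[node] += resv ))
      # Per-node surplus, read from /sys/devices/system/node/node$node/meminfo.
      surp=$(get_meminfo_sketch HugePages_Surp "$node")
  done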
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.514 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
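Editor's note: the long run of IFS=': ' / read -r var val _ / continue entries here is the trace of a field scanner. setup/common.sh reads /sys/devices/system/node/node0/meminfo line by line, strips the leading "Node 0 " prefix, and skips every key until it reaches the one requested (HugePages_Surp in this pass). A minimal stand-alone sketch of that pattern follows; the helper name node_meminfo and the hard-coded paths are illustrative, not the script's own.

#!/usr/bin/env bash
# Sketch only: mirrors the per-node meminfo scan traced above. The helper
# name and argument handling are assumptions for illustration.
shopt -s extglob                            # for the +([0-9]) pattern below

node_meminfo() {
    local want=$1 node=${2:-0}
    local file=/sys/devices/system/node/node${node}/meminfo
    local -a lines
    local line var val _

    mapfile -t lines < "$file"              # one array element per meminfo line
    lines=("${lines[@]#Node +([0-9]) }")    # drop the leading "Node N " prefix

    for line in "${lines[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # e.g. "HugePages_Surp:  0" -> var/val
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done
    return 1
}

node_meminfo HugePages_Surp 0               # prints 0 for the node traced above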
00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.515 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.516 node0=1024 expecting 1024 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.516 00:03:52.516 real 0m8.206s 00:03:52.516 user 0m2.960s 00:03:52.516 sys 0m5.323s 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.516 10:11:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.516 ************************************ 00:03:52.516 END TEST no_shrink_alloc 00:03:52.516 ************************************ 00:03:52.516 10:11:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:52.516 10:11:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:52.516 00:03:52.516 real 0m31.847s 00:03:52.516 user 0m10.991s 00:03:52.516 sys 0m19.391s 00:03:52.516 10:11:17 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.516 10:11:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.516 ************************************ 00:03:52.516 END TEST hugepages 00:03:52.516 ************************************ 00:03:52.516 10:11:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.516 10:11:17 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 
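Editor's note: before the driver tests begin, this is what the no_shrink_alloc trace above actually verified: all 1024 hugepages reported system-wide sit on node 0, hence "node0=1024 expecting 1024". A rough stand-alone equivalent of that accounting is sketched below; the expected count and the 2048 kB page size are illustrative assumptions, not values the test hard-codes.

#!/usr/bin/env bash
# Sketch only: stand-alone version of the accounting behind
# "node0=1024 expecting 1024" in the trace above.
expected_node0=1024
hp_kb=2048    # assumed default hugepage size on x86_64

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
node0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-${hp_kb}kB/nr_hugepages)

echo "system HugePages_Total=$total node0=$node0"
(( node0 == expected_node0 )) || { echo "unexpected node0 hugepage count" >&2; exit 1; }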
00:03:52.516 10:11:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.516 10:11:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.516 10:11:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.516 ************************************ 00:03:52.516 START TEST driver 00:03:52.516 ************************************ 00:03:52.516 10:11:17 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:03:52.516 * Looking for test storage... 00:03:52.516 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:52.516 10:11:17 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:52.516 10:11:17 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.516 10:11:17 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.081 10:11:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:59.081 10:11:22 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.081 10:11:22 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.081 10:11:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:59.081 ************************************ 00:03:59.081 START TEST guess_driver 00:03:59.081 ************************************ 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:59.081 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:59.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:59.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:59.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:59.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:59.082 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:59.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:59.082 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:59.082 Looking for driver=vfio-pci 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.082 10:11:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:02.389 10:11:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.289 10:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:04.289 10:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:04.289 10:11:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.289 10:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:04.289 10:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:04.289 10:11:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.289 10:11:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 
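Editor's note: the guess_driver trace above reduces to three checks: whether unsafe no-IOMMU mode is enabled, whether /sys/kernel/iommu_groups is non-empty (256 groups here), and whether "modprobe --show-depends vfio_pci" resolves to real .ko modules. A rough stand-alone sketch of that decision follows; the function name and the uio_pci_generic fallback are assumptions for illustration, not the script's exact flow.

#!/usr/bin/env bash
# Sketch only: approximates the driver selection traced above. The
# uio_pci_generic fallback is an assumption, not part of the trace.
shopt -s nullglob

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    if { (( ${#groups[@]} > 0 )) || [[ $unsafe == [Yy] ]]; } &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

pick_driver        # prints "vfio-pci" on the 256-group machine traced above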
00:04:10.876 00:04:10.876 real 0m11.731s 00:04:10.876 user 0m3.020s 00:04:10.876 sys 0m6.059s 00:04:10.876 10:11:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.876 10:11:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.876 ************************************ 00:04:10.876 END TEST guess_driver 00:04:10.876 ************************************ 00:04:10.876 10:11:34 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:10.876 00:04:10.876 real 0m17.460s 00:04:10.876 user 0m4.670s 00:04:10.876 sys 0m9.377s 00:04:10.876 10:11:34 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.876 10:11:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.876 ************************************ 00:04:10.876 END TEST driver 00:04:10.876 ************************************ 00:04:10.876 10:11:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:10.876 10:11:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:10.876 10:11:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.877 10:11:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.877 10:11:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.877 ************************************ 00:04:10.877 START TEST devices 00:04:10.877 ************************************ 00:04:10.877 10:11:34 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:10.877 * Looking for test storage... 00:04:10.877 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:10.877 10:11:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:10.877 10:11:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:10.877 10:11:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.877 10:11:34 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:15.068 10:11:39 
setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:15.068 10:11:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:15.068 10:11:39 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:15.068 No valid GPT data, bailing 00:04:15.068 10:11:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.068 10:11:39 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:15.068 10:11:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:15.068 10:11:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:15.068 10:11:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:15.068 10:11:39 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:15.068 10:11:39 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.068 10:11:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.068 ************************************ 00:04:15.068 START TEST nvme_mount 00:04:15.068 ************************************ 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 
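Editor's note: two gating checks are visible in the block-device scan above: the disk must not already carry a partition table ("No valid GPT data, bailing" is the desired outcome), and it must be at least 3221225472 bytes (3 GiB); the echoed 2000398934016 is the 2 TB capacity of nvme0n1. A compact sketch of both checks, with an illustrative device name:

#!/usr/bin/env bash
# Sketch only: the two gating checks traced above. Device name is illustrative.
dev=nvme0n1
min_disk_size=3221225472    # 3 GiB, as in the trace

pt=$(blkid -s PTTYPE -o value "/dev/$dev" || true)
[[ -z $pt ]] && echo "/dev/$dev: no partition table, safe to test on"

# /sys/block/<dev>/size is always reported in 512-byte sectors.
bytes=$(( $(cat "/sys/block/$dev/size") * 512 ))
(( bytes >= min_disk_size )) && echo "/dev/$dev: $bytes bytes, large enough"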
00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.068 10:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:15.636 Creating new GPT entries in memory. 00:04:15.636 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.636 other utilities. 00:04:15.636 10:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.636 10:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.636 10:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.636 10:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.636 10:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:17.012 Creating new GPT entries in memory. 00:04:17.012 The operation has completed successfully. 
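Editor's note: stripped of the xtrace bookkeeping, the partitioning step that just reported "The operation has completed successfully" is a two-command sgdisk sequence plus a wait for the new device node. A minimal sketch, in which udevadm settle stands in for the sync_dev_uevents.sh helper the real run uses:

#!/usr/bin/env bash
# Sketch only: the sgdisk sequence behind the messages above.
disk=/dev/nvme0n1

sgdisk "$disk" --zap-all                # wipe any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199     # 1 GiB partition: sectors 2048..2099199

udevadm settle                          # let udev create /dev/nvme0n1p1
[[ -b ${disk}p1 ]] && echo "created ${disk}p1"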
00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1674555 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.012 10:11:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 
10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:20.295 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.295 10:11:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.554 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:20.554 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:20.554 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.554 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 
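Editor's note: after unmounting and wiping the partition, the test reformats the bare disk with a size-capped filesystem and mounts it again, which is the round being set up here. The essence of that step, with illustrative mount-point and marker-file paths:

#!/usr/bin/env bash
# Sketch only: the whole-disk round traced above. The 1024M size argument is
# taken from the log; the paths are illustrative stand-ins.
dev=/dev/nvme0n1
mnt=/mnt/nvme_mount_test

mkdir -p "$mnt"
mkfs.ext4 -qF "$dev" 1024M      # cap the filesystem at 1024M of the 2 TB disk
mount "$dev" "$mnt"
touch "$mnt/test_nvme"          # marker file; see the verify sketch further down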
00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.554 10:11:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:24.741 10:11:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
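Editor's note: the mountpoint -q and [[ -e .../test_nvme ]] checks that keep reappearing in this trace are the whole verification step: confirm the expected filesystem is mounted and the marker file survived. A compact stand-alone equivalent, with illustrative paths:

#!/usr/bin/env bash
# Sketch only: the recurring verification pattern. Paths are illustrative.
mnt=/mnt/nvme_mount_test
marker=$mnt/test_nvme

if mountpoint -q "$mnt" && [[ -e $marker ]]; then
    echo "verify ok: $mnt is mounted and $marker exists"
else
    echo "verify failed" >&2
    exit 1
fi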
00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.741 10:11:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.930 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:28.930 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:28.930 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.931 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.931 00:04:28.931 real 0m13.937s 00:04:28.931 user 0m3.782s 00:04:28.931 sys 0m7.845s 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.931 10:11:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:28.931 ************************************ 00:04:28.931 END TEST nvme_mount 00:04:28.931 ************************************ 00:04:28.931 10:11:53 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:28.931 10:11:53 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:28.931 10:11:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.931 10:11:53 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.931 10:11:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:28.931 ************************************ 00:04:28.931 START TEST dm_mount 00:04:28.931 ************************************ 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:28.931 10:11:53 
setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:28.931 10:11:53 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:29.867 Creating new GPT entries in memory. 00:04:29.867 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:29.867 other utilities. 00:04:29.867 10:11:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:29.867 10:11:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.867 10:11:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.867 10:11:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.867 10:11:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:30.803 Creating new GPT entries in memory. 00:04:30.803 The operation has completed successfully. 00:04:30.803 10:11:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:30.803 10:11:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.803 10:11:55 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.803 10:11:55 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.803 10:11:55 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:31.739 The operation has completed successfully. 
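For reference, the partition layout the dm_mount test just created can be reproduced by hand. This is a minimal sketch of the same sequence, with the disk name and sizes taken from the trace above; the real test additionally synchronizes on the kernel uevents for both partitions via SPDK's scripts/sync_dev_uevents.sh, shown here only as a comment.

  # Wipe the GPT on the test disk and carve two 1 GiB partitions, exactly as traced above.
  disk=/dev/nvme0n1
  size=$((1073741824 / 512))                                       # bytes -> 512-byte sectors = 2097152
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:$((2048 + size - 1))                  # sectors 2048..2099199
  flock "$disk" sgdisk "$disk" --new=2:$((2048 + size)):$((2048 + 2 * size - 1))  # sectors 2099200..4196351
  # The test then waits for the kernel to announce both partitions:
  #   scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2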
00:04:31.739 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:31.739 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.739 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1679709 00:04:31.739 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.739 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.739 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.739 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 
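After the partitions exist, the test builds a device-mapper device on top of them, formats it, and mounts it. The sketch below mirrors the dmsetup/readlink/mkfs/mount calls visible in the trace; note that the mapping table handed to dmsetup is not shown in the xtrace, so the linear concatenation of the two partitions used here is an illustrative assumption, not taken from the run.

  dm_name=nvme_dm_test
  dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount
  # Table is an assumption: concatenate p1 and p2 as one linear device (format: start length linear dev offset).
  printf '0 2097152 linear /dev/nvme0n1p1 0\n2097152 2097152 linear /dev/nvme0n1p2 0\n' \
      | dmsetup create "$dm_name"
  dm=$(readlink -f "/dev/mapper/$dm_name")                         # resolved to /dev/dm-2 in this run
  # Both partitions must list the new dm node as their holder before it is used:
  [[ -e /sys/class/block/nvme0n1p1/holders/${dm##*/} ]]
  [[ -e /sys/class/block/nvme0n1p2/holders/${dm##*/} ]]
  mkdir -p "$dm_mount"
  mkfs.ext4 -qF "/dev/mapper/$dm_name"
  mount "/dev/mapper/$dm_name" "$dm_mount"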
00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.997 10:11:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 
10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output 
]] 00:04:36.186 10:12:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- 
# [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:39.473 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:39.732 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:39.732 00:04:39.732 real 0m10.956s 00:04:39.732 user 0m2.600s 00:04:39.732 sys 0m5.336s 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.732 10:12:04 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.732 ************************************ 00:04:39.732 END TEST dm_mount 00:04:39.732 ************************************ 00:04:39.732 10:12:04 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:39.732 10:12:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:39.732 10:12:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:39.732 10:12:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.732 10:12:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.732 10:12:04 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:39.733 10:12:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.733 10:12:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.992 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:39.992 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:39.992 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.992 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.992 10:12:04 setup.sh.devices -- 
setup/devices.sh@12 -- # cleanup_dm 00:04:39.992 10:12:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:39.992 10:12:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.992 10:12:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.992 10:12:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.992 10:12:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.992 10:12:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:39.992 00:04:39.992 real 0m30.052s 00:04:39.992 user 0m8.136s 00:04:39.992 sys 0m16.519s 00:04:39.992 10:12:04 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.992 10:12:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 ************************************ 00:04:39.992 END TEST devices 00:04:39.992 ************************************ 00:04:39.992 10:12:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:39.992 00:04:39.992 real 1m47.438s 00:04:39.992 user 0m32.129s 00:04:39.992 sys 1m2.639s 00:04:39.992 10:12:04 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.992 10:12:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.992 ************************************ 00:04:39.992 END TEST setup.sh 00:04:39.992 ************************************ 00:04:40.250 10:12:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.251 10:12:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:43.533 Hugepages 00:04:43.533 node hugesize free / total 00:04:43.533 node0 1048576kB 0 / 0 00:04:43.533 node0 2048kB 1024 / 1024 00:04:43.533 node1 1048576kB 0 / 0 00:04:43.533 node1 2048kB 1024 / 1024 00:04:43.533 00:04:43.533 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.533 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:43.533 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:43.533 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:43.533 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:43.533 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:43.533 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:43.533 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:43.791 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:43.791 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:43.791 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:43.791 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:43.791 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:43.791 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:43.791 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:43.791 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:43.791 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:43.791 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:43.791 10:12:08 -- spdk/autotest.sh@130 -- # uname -s 00:04:43.791 10:12:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:43.791 10:12:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:43.791 10:12:08 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:48.053 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:00:04.3 
(8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.053 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:49.436 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.694 10:12:14 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:50.627 10:12:15 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:50.627 10:12:15 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:50.627 10:12:15 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.627 10:12:15 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:50.627 10:12:15 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:50.627 10:12:15 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:50.627 10:12:15 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.627 10:12:15 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:50.627 10:12:15 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:50.627 10:12:15 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:50.627 10:12:15 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:50.627 10:12:15 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.808 Waiting for block devices as requested 00:04:54.808 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:54.808 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:54.808 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:54.808 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:55.067 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:55.067 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:55.067 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:55.067 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:55.324 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:55.324 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:55.324 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:55.583 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:55.583 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:55.583 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:55.841 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:55.841 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:55.841 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:56.100 10:12:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:56.100 10:12:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:56.100 10:12:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:56.100 10:12:20 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:04:56.100 10:12:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:56.100 10:12:20 -- common/autotest_common.sh@1503 -- # [[ -z 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:56.100 10:12:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:56.100 10:12:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:56.100 10:12:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:56.100 10:12:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:56.100 10:12:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:56.100 10:12:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:56.100 10:12:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:56.100 10:12:20 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:56.100 10:12:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:56.100 10:12:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:56.100 10:12:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:56.100 10:12:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:56.100 10:12:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:56.100 10:12:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:56.100 10:12:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:56.100 10:12:20 -- common/autotest_common.sh@1557 -- # continue 00:04:56.100 10:12:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:56.100 10:12:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:56.100 10:12:20 -- common/autotest_common.sh@10 -- # set +x 00:04:56.100 10:12:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:56.100 10:12:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.100 10:12:20 -- common/autotest_common.sh@10 -- # set +x 00:04:56.100 10:12:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:00.288 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:00.288 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:00.546 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:02.449 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:02.449 10:12:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:02.449 10:12:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.449 10:12:27 -- common/autotest_common.sh@10 -- # set +x 00:05:02.449 10:12:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:02.449 10:12:27 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:02.449 10:12:27 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.449 10:12:27 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:02.449 10:12:27 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:02.449 10:12:27 -- common/autotest_common.sh@1579 -- # 
get_nvme_bdfs 00:05:02.449 10:12:27 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:02.449 10:12:27 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:02.449 10:12:27 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.449 10:12:27 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:02.449 10:12:27 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:02.707 10:12:27 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:02.707 10:12:27 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:02.707 10:12:27 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:02.707 10:12:27 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:02.707 10:12:27 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:02.707 10:12:27 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:02.707 10:12:27 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:02.707 10:12:27 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:02.707 10:12:27 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:02.707 10:12:27 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1691511 00:05:02.707 10:12:27 -- common/autotest_common.sh@1598 -- # waitforlisten 1691511 00:05:02.707 10:12:27 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.707 10:12:27 -- common/autotest_common.sh@829 -- # '[' -z 1691511 ']' 00:05:02.707 10:12:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.707 10:12:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.707 10:12:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.707 10:12:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.707 10:12:27 -- common/autotest_common.sh@10 -- # set +x 00:05:02.707 [2024-07-15 10:12:27.353760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
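Before the opal revert step, the helpers traced above resolve which NVMe controllers are present and which of them match the device id the test cares about. Condensed, get_nvme_bdfs and the 0x0a54 filter come down to the following (repo root path as used in this run):

  rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
  # gen_nvme.sh emits a bdev_nvme config as JSON; jq pulls each controller's PCI address (traddr).
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # Keep only controllers whose PCI device id is 0x0a54, as get_nvme_bdfs_by_id 0x0a54 does.
  for bdf in "${bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
  done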
00:05:02.707 [2024-07-15 10:12:27.353809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691511 ] 00:05:02.707 [2024-07-15 10:12:27.436233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.965 [2024-07-15 10:12:27.510842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.531 10:12:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.531 10:12:28 -- common/autotest_common.sh@862 -- # return 0 00:05:03.531 10:12:28 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:03.531 10:12:28 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:03.531 10:12:28 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:06.814 nvme0n1 00:05:06.814 10:12:31 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:06.814 [2024-07-15 10:12:31.309584] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:06.814 request: 00:05:06.814 { 00:05:06.814 "nvme_ctrlr_name": "nvme0", 00:05:06.814 "password": "test", 00:05:06.814 "method": "bdev_nvme_opal_revert", 00:05:06.814 "req_id": 1 00:05:06.814 } 00:05:06.814 Got JSON-RPC error response 00:05:06.814 response: 00:05:06.814 { 00:05:06.814 "code": -32602, 00:05:06.814 "message": "Invalid parameters" 00:05:06.814 } 00:05:06.814 10:12:31 -- common/autotest_common.sh@1604 -- # true 00:05:06.814 10:12:31 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:06.814 10:12:31 -- common/autotest_common.sh@1608 -- # killprocess 1691511 00:05:06.814 10:12:31 -- common/autotest_common.sh@948 -- # '[' -z 1691511 ']' 00:05:06.814 10:12:31 -- common/autotest_common.sh@952 -- # kill -0 1691511 00:05:06.814 10:12:31 -- common/autotest_common.sh@953 -- # uname 00:05:06.814 10:12:31 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.814 10:12:31 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1691511 00:05:06.814 10:12:31 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.814 10:12:31 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.814 10:12:31 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1691511' 00:05:06.814 killing process with pid 1691511 00:05:06.814 10:12:31 -- common/autotest_common.sh@967 -- # kill 1691511 00:05:06.814 10:12:31 -- common/autotest_common.sh@972 -- # wait 1691511 00:05:09.402 10:12:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:09.402 10:12:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:09.402 10:12:33 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:09.402 10:12:33 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:09.402 10:12:33 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:09.969 Restarting all devices. 
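The opal_revert_cleanup step itself is a short conversation with a freshly started spdk_tgt over its default RPC socket. A minimal re-creation of what the trace shows, run from the SPDK repo root, could look like the following; the polling loop is a crude stand-in for the test's waitforlisten helper (an assumption, the real helper is more careful), and on this particular drive the revert is expected to fail with "nvme0 not support opal", which the test tolerates.

  ./build/bin/spdk_tgt &
  tgt_pid=$!
  # Wait until the target answers on /var/tmp/spdk.sock (stand-in for waitforlisten).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # Attach the controller found above, then attempt the revert with password "test".
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test || true   # fails here: controller has no opal support
  kill "$tgt_pid"; wait "$tgt_pid" || true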
00:05:16.535 lstat() error: No such file or directory 00:05:16.535 QAT Error: No GENERAL section found 00:05:16.535 Failed to configure qat_dev0 00:05:16.535 lstat() error: No such file or directory 00:05:16.535 QAT Error: No GENERAL section found 00:05:16.535 Failed to configure qat_dev1 00:05:16.535 lstat() error: No such file or directory 00:05:16.535 QAT Error: No GENERAL section found 00:05:16.535 Failed to configure qat_dev2 00:05:16.535 lstat() error: No such file or directory 00:05:16.535 QAT Error: No GENERAL section found 00:05:16.535 Failed to configure qat_dev3 00:05:16.535 lstat() error: No such file or directory 00:05:16.535 QAT Error: No GENERAL section found 00:05:16.535 Failed to configure qat_dev4 00:05:16.535 enable sriov 00:05:16.535 Checking status of all devices. 00:05:16.535 There is 5 QAT acceleration device(s) in the system: 00:05:16.535 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:05:16.535 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:05:16.535 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:05:16.535 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:05:16.535 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:05:16.535 0000:1a:00.0 set to 16 VFs 00:05:17.108 0000:1c:00.0 set to 16 VFs 00:05:18.054 0000:1e:00.0 set to 16 VFs 00:05:18.620 0000:3d:00.0 set to 16 VFs 00:05:19.587 0000:3f:00.0 set to 16 VFs 00:05:22.113 Properly configured the qat device with driver uio_pci_generic. 00:05:22.113 10:12:46 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:22.113 10:12:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.113 10:12:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.113 10:12:46 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:22.113 10:12:46 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:22.113 10:12:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.113 10:12:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.113 10:12:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.113 ************************************ 00:05:22.113 START TEST env 00:05:22.113 ************************************ 00:05:22.113 10:12:46 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:22.113 * Looking for test storage... 
00:05:22.113 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:22.113 10:12:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:22.113 10:12:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.113 10:12:46 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.113 10:12:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.113 ************************************ 00:05:22.113 START TEST env_memory 00:05:22.113 ************************************ 00:05:22.113 10:12:46 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:22.113 00:05:22.113 00:05:22.113 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.113 http://cunit.sourceforge.net/ 00:05:22.113 00:05:22.113 00:05:22.113 Suite: memory 00:05:22.113 Test: alloc and free memory map ...[2024-07-15 10:12:46.729921] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:22.113 passed 00:05:22.113 Test: mem map translation ...[2024-07-15 10:12:46.748202] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:22.113 [2024-07-15 10:12:46.748216] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:22.113 [2024-07-15 10:12:46.748268] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:22.113 [2024-07-15 10:12:46.748276] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:22.113 passed 00:05:22.113 Test: mem map registration ...[2024-07-15 10:12:46.784082] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:22.113 [2024-07-15 10:12:46.784098] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:22.113 passed 00:05:22.113 Test: mem map adjacent registrations ...passed 00:05:22.113 00:05:22.113 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.113 suites 1 1 n/a 0 0 00:05:22.113 tests 4 4 4 0 0 00:05:22.113 asserts 152 152 152 0 n/a 00:05:22.113 00:05:22.113 Elapsed time = 0.132 seconds 00:05:22.113 00:05:22.113 real 0m0.146s 00:05:22.113 user 0m0.137s 00:05:22.113 sys 0m0.008s 00:05:22.113 10:12:46 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.114 10:12:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:22.114 ************************************ 00:05:22.114 END TEST env_memory 00:05:22.114 ************************************ 00:05:22.114 10:12:46 env -- common/autotest_common.sh@1142 -- # return 0 00:05:22.114 10:12:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.114 10:12:46 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.114 10:12:46 env -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.114 10:12:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.114 ************************************ 00:05:22.114 START TEST env_vtophys 00:05:22.114 ************************************ 00:05:22.114 10:12:46 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:22.376 EAL: lib.eal log level changed from notice to debug 00:05:22.376 EAL: Detected lcore 0 as core 0 on socket 0 00:05:22.376 EAL: Detected lcore 1 as core 1 on socket 0 00:05:22.376 EAL: Detected lcore 2 as core 2 on socket 0 00:05:22.376 EAL: Detected lcore 3 as core 3 on socket 0 00:05:22.376 EAL: Detected lcore 4 as core 4 on socket 0 00:05:22.376 EAL: Detected lcore 5 as core 5 on socket 0 00:05:22.376 EAL: Detected lcore 6 as core 6 on socket 0 00:05:22.376 EAL: Detected lcore 7 as core 8 on socket 0 00:05:22.376 EAL: Detected lcore 8 as core 9 on socket 0 00:05:22.376 EAL: Detected lcore 9 as core 10 on socket 0 00:05:22.376 EAL: Detected lcore 10 as core 11 on socket 0 00:05:22.376 EAL: Detected lcore 11 as core 12 on socket 0 00:05:22.376 EAL: Detected lcore 12 as core 13 on socket 0 00:05:22.376 EAL: Detected lcore 13 as core 14 on socket 0 00:05:22.376 EAL: Detected lcore 14 as core 16 on socket 0 00:05:22.376 EAL: Detected lcore 15 as core 17 on socket 0 00:05:22.376 EAL: Detected lcore 16 as core 18 on socket 0 00:05:22.376 EAL: Detected lcore 17 as core 19 on socket 0 00:05:22.376 EAL: Detected lcore 18 as core 20 on socket 0 00:05:22.376 EAL: Detected lcore 19 as core 21 on socket 0 00:05:22.376 EAL: Detected lcore 20 as core 22 on socket 0 00:05:22.376 EAL: Detected lcore 21 as core 24 on socket 0 00:05:22.376 EAL: Detected lcore 22 as core 25 on socket 0 00:05:22.376 EAL: Detected lcore 23 as core 26 on socket 0 00:05:22.376 EAL: Detected lcore 24 as core 27 on socket 0 00:05:22.376 EAL: Detected lcore 25 as core 28 on socket 0 00:05:22.376 EAL: Detected lcore 26 as core 29 on socket 0 00:05:22.376 EAL: Detected lcore 27 as core 30 on socket 0 00:05:22.376 EAL: Detected lcore 28 as core 0 on socket 1 00:05:22.376 EAL: Detected lcore 29 as core 1 on socket 1 00:05:22.376 EAL: Detected lcore 30 as core 2 on socket 1 00:05:22.376 EAL: Detected lcore 31 as core 3 on socket 1 00:05:22.376 EAL: Detected lcore 32 as core 4 on socket 1 00:05:22.376 EAL: Detected lcore 33 as core 5 on socket 1 00:05:22.376 EAL: Detected lcore 34 as core 6 on socket 1 00:05:22.376 EAL: Detected lcore 35 as core 8 on socket 1 00:05:22.376 EAL: Detected lcore 36 as core 9 on socket 1 00:05:22.376 EAL: Detected lcore 37 as core 10 on socket 1 00:05:22.376 EAL: Detected lcore 38 as core 11 on socket 1 00:05:22.376 EAL: Detected lcore 39 as core 12 on socket 1 00:05:22.376 EAL: Detected lcore 40 as core 13 on socket 1 00:05:22.376 EAL: Detected lcore 41 as core 14 on socket 1 00:05:22.376 EAL: Detected lcore 42 as core 16 on socket 1 00:05:22.376 EAL: Detected lcore 43 as core 17 on socket 1 00:05:22.376 EAL: Detected lcore 44 as core 18 on socket 1 00:05:22.376 EAL: Detected lcore 45 as core 19 on socket 1 00:05:22.376 EAL: Detected lcore 46 as core 20 on socket 1 00:05:22.376 EAL: Detected lcore 47 as core 21 on socket 1 00:05:22.376 EAL: Detected lcore 48 as core 22 on socket 1 00:05:22.376 EAL: Detected lcore 49 as core 24 on socket 1 00:05:22.376 EAL: Detected lcore 50 as core 25 on socket 1 00:05:22.376 EAL: Detected lcore 51 as core 26 on socket 1 00:05:22.376 EAL: Detected lcore 52 as 
core 27 on socket 1 00:05:22.376 EAL: Detected lcore 53 as core 28 on socket 1 00:05:22.376 EAL: Detected lcore 54 as core 29 on socket 1 00:05:22.376 EAL: Detected lcore 55 as core 30 on socket 1 00:05:22.376 EAL: Detected lcore 56 as core 0 on socket 0 00:05:22.376 EAL: Detected lcore 57 as core 1 on socket 0 00:05:22.376 EAL: Detected lcore 58 as core 2 on socket 0 00:05:22.376 EAL: Detected lcore 59 as core 3 on socket 0 00:05:22.376 EAL: Detected lcore 60 as core 4 on socket 0 00:05:22.376 EAL: Detected lcore 61 as core 5 on socket 0 00:05:22.376 EAL: Detected lcore 62 as core 6 on socket 0 00:05:22.376 EAL: Detected lcore 63 as core 8 on socket 0 00:05:22.376 EAL: Detected lcore 64 as core 9 on socket 0 00:05:22.376 EAL: Detected lcore 65 as core 10 on socket 0 00:05:22.376 EAL: Detected lcore 66 as core 11 on socket 0 00:05:22.376 EAL: Detected lcore 67 as core 12 on socket 0 00:05:22.376 EAL: Detected lcore 68 as core 13 on socket 0 00:05:22.376 EAL: Detected lcore 69 as core 14 on socket 0 00:05:22.376 EAL: Detected lcore 70 as core 16 on socket 0 00:05:22.376 EAL: Detected lcore 71 as core 17 on socket 0 00:05:22.376 EAL: Detected lcore 72 as core 18 on socket 0 00:05:22.376 EAL: Detected lcore 73 as core 19 on socket 0 00:05:22.376 EAL: Detected lcore 74 as core 20 on socket 0 00:05:22.376 EAL: Detected lcore 75 as core 21 on socket 0 00:05:22.376 EAL: Detected lcore 76 as core 22 on socket 0 00:05:22.376 EAL: Detected lcore 77 as core 24 on socket 0 00:05:22.376 EAL: Detected lcore 78 as core 25 on socket 0 00:05:22.376 EAL: Detected lcore 79 as core 26 on socket 0 00:05:22.376 EAL: Detected lcore 80 as core 27 on socket 0 00:05:22.376 EAL: Detected lcore 81 as core 28 on socket 0 00:05:22.376 EAL: Detected lcore 82 as core 29 on socket 0 00:05:22.376 EAL: Detected lcore 83 as core 30 on socket 0 00:05:22.376 EAL: Detected lcore 84 as core 0 on socket 1 00:05:22.376 EAL: Detected lcore 85 as core 1 on socket 1 00:05:22.376 EAL: Detected lcore 86 as core 2 on socket 1 00:05:22.376 EAL: Detected lcore 87 as core 3 on socket 1 00:05:22.376 EAL: Detected lcore 88 as core 4 on socket 1 00:05:22.376 EAL: Detected lcore 89 as core 5 on socket 1 00:05:22.376 EAL: Detected lcore 90 as core 6 on socket 1 00:05:22.376 EAL: Detected lcore 91 as core 8 on socket 1 00:05:22.376 EAL: Detected lcore 92 as core 9 on socket 1 00:05:22.376 EAL: Detected lcore 93 as core 10 on socket 1 00:05:22.376 EAL: Detected lcore 94 as core 11 on socket 1 00:05:22.376 EAL: Detected lcore 95 as core 12 on socket 1 00:05:22.376 EAL: Detected lcore 96 as core 13 on socket 1 00:05:22.376 EAL: Detected lcore 97 as core 14 on socket 1 00:05:22.376 EAL: Detected lcore 98 as core 16 on socket 1 00:05:22.376 EAL: Detected lcore 99 as core 17 on socket 1 00:05:22.376 EAL: Detected lcore 100 as core 18 on socket 1 00:05:22.376 EAL: Detected lcore 101 as core 19 on socket 1 00:05:22.376 EAL: Detected lcore 102 as core 20 on socket 1 00:05:22.376 EAL: Detected lcore 103 as core 21 on socket 1 00:05:22.376 EAL: Detected lcore 104 as core 22 on socket 1 00:05:22.376 EAL: Detected lcore 105 as core 24 on socket 1 00:05:22.376 EAL: Detected lcore 106 as core 25 on socket 1 00:05:22.376 EAL: Detected lcore 107 as core 26 on socket 1 00:05:22.376 EAL: Detected lcore 108 as core 27 on socket 1 00:05:22.376 EAL: Detected lcore 109 as core 28 on socket 1 00:05:22.376 EAL: Detected lcore 110 as core 29 on socket 1 00:05:22.376 EAL: Detected lcore 111 as core 30 on socket 1 00:05:22.376 EAL: Maximum logical cores by configuration: 
128 00:05:22.376 EAL: Detected CPU lcores: 112 00:05:22.376 EAL: Detected NUMA nodes: 2 00:05:22.376 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:22.376 EAL: Detected shared linkage of DPDK 00:05:22.376 EAL: No shared files mode enabled, IPC will be disabled 00:05:22.376 EAL: No shared files mode enabled, IPC is disabled 00:05:22.377 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI 
driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:05:22.377 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:05:22.377 EAL: Bus pci wants IOVA as 'PA' 00:05:22.377 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:22.377 EAL: Bus vdev wants IOVA as 'DC' 00:05:22.377 EAL: Selected IOVA mode 'PA' 00:05:22.377 EAL: Probing VFIO support... 00:05:22.377 EAL: IOMMU type 1 (Type 1) is supported 00:05:22.377 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:22.377 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:22.377 EAL: VFIO support initialized 00:05:22.377 EAL: Ask a virtual area of 0x2e000 bytes 00:05:22.377 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:22.377 EAL: Setting up physically contiguous memory... 
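The EAL lines above show the library enumerating 112 lcores across 2 NUMA sockets, negotiating the IOVA mode per driver (the qat VFs ask for 'PA', the auxiliary and vdev buses accept 'DC'), settling on IOVA mode 'PA', and probing VFIO (IOMMU type 1 supported). A minimal standalone DPDK sketch, not part of the SPDK test binaries, that queries the same facts after init could look like the following; it only uses public EAL calls, and any program name or argument passed to it is illustrative.

/* eal_probe.c - hedged sketch: report what rte_eal_init() detected.
 * Build against DPDK (e.g. via pkg-config libdpdk); not taken from this test run. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() performs the lcore/NUMA detection, bus scan and
     * IOVA-mode negotiation that produced the EAL log lines above. */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }

    printf("lcores: %u, NUMA sockets: %u\n",
           rte_lcore_count(), rte_socket_count());

    const char *iova = "DC";                 /* "don't care" */
    if (rte_eal_iova_mode() == RTE_IOVA_PA)
        iova = "PA";
    else if (rte_eal_iova_mode() == RTE_IOVA_VA)
        iova = "VA";
    printf("selected IOVA mode: %s\n", iova);

    rte_eal_cleanup();
    return 0;
}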
00:05:22.377 EAL: Setting maximum number of open files to 524288 00:05:22.377 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:22.377 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:22.377 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:22.377 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.377 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:22.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.377 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.377 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:22.377 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:22.377 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.377 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:22.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.377 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.377 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:22.377 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:22.377 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.377 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:22.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.377 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.377 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:22.377 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:22.377 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.377 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:22.377 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:22.377 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.377 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:22.377 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:22.377 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:22.377 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.377 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:22.377 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.377 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.377 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:22.377 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:22.377 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.377 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:22.377 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.377 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.377 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:22.377 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:22.377 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.377 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:22.377 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.377 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.377 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:22.377 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:22.377 EAL: Ask a virtual area of 0x61000 bytes 00:05:22.377 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:22.377 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:22.377 EAL: Ask a virtual area of 0x400000000 bytes 00:05:22.377 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:22.377 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:22.377 EAL: Hugepages will be freed exactly as allocated. 00:05:22.377 EAL: No shared files mode enabled, IPC is disabled 00:05:22.377 EAL: No shared files mode enabled, IPC is disabled 00:05:22.377 EAL: TSC frequency is ~2500000 KHz 00:05:22.377 EAL: Main lcore 0 is ready (tid=7f471b1aab00;cpuset=[0]) 00:05:22.377 EAL: Trying to obtain current memory policy. 00:05:22.377 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.377 EAL: Restoring previous memory policy: 0 00:05:22.377 EAL: request: mp_malloc_sync 00:05:22.377 EAL: No shared files mode enabled, IPC is disabled 00:05:22.377 EAL: Heap on socket 0 was expanded by 2MB 00:05:22.377 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:05:22.377 EAL: probe driver: 8086:37c9 qat 00:05:22.377 EAL: PCI memory mapped at 0x202001000000 00:05:22.377 EAL: PCI memory mapped at 0x202001001000 00:05:22.377 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:22.377 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:05:22.377 EAL: probe driver: 8086:37c9 qat 00:05:22.377 EAL: PCI memory mapped at 0x202001002000 00:05:22.377 EAL: PCI memory mapped at 0x202001003000 00:05:22.377 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:22.377 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:05:22.377 EAL: probe driver: 8086:37c9 qat 00:05:22.377 EAL: PCI memory mapped at 0x202001004000 00:05:22.377 EAL: PCI memory mapped at 0x202001005000 00:05:22.377 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:22.377 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001006000 00:05:22.378 EAL: PCI memory mapped at 0x202001007000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001008000 00:05:22.378 EAL: PCI memory mapped at 0x202001009000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200100a000 00:05:22.378 EAL: PCI memory mapped at 0x20200100b000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200100c000 00:05:22.378 EAL: PCI memory mapped at 0x20200100d000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200100e000 00:05:22.378 EAL: PCI memory mapped at 0x20200100f000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001010000 00:05:22.378 EAL: PCI memory mapped at 0x202001011000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 
EAL: PCI memory mapped at 0x202001012000 00:05:22.378 EAL: PCI memory mapped at 0x202001013000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001014000 00:05:22.378 EAL: PCI memory mapped at 0x202001015000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001016000 00:05:22.378 EAL: PCI memory mapped at 0x202001017000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001018000 00:05:22.378 EAL: PCI memory mapped at 0x202001019000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200101a000 00:05:22.378 EAL: PCI memory mapped at 0x20200101b000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200101c000 00:05:22.378 EAL: PCI memory mapped at 0x20200101d000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:22.378 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200101e000 00:05:22.378 EAL: PCI memory mapped at 0x20200101f000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001020000 00:05:22.378 EAL: PCI memory mapped at 0x202001021000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001022000 00:05:22.378 EAL: PCI memory mapped at 0x202001023000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001024000 00:05:22.378 EAL: PCI memory mapped at 0x202001025000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001026000 00:05:22.378 EAL: PCI memory mapped at 0x202001027000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001028000 00:05:22.378 EAL: PCI memory mapped at 0x202001029000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 
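A little further up, the EAL reserves four memseg lists per NUMA socket (n_segs:8192, 2 MiB pages, 0x400000000 bytes of VA each) before any hugepages are actually backed. A hedged sketch of walking those lists after rte_eal_init() and printing the same socket/page-size information; rte_memseg_list_walk() is the public DPDK API, the output layout is illustrative.

#include <stdio.h>
#include <inttypes.h>
#include <rte_eal.h>
#include <rte_memory.h>

/* Called once per memseg list; mirrors the
 * "Memseg list allocated at socket N, page size 0x800kB" lines above. */
static int print_msl(const struct rte_memseg_list *msl, void *arg)
{
    (void)arg;
    printf("memseg list: socket %d, page size 0x%" PRIx64 ", base VA %p\n",
           msl->socket_id, msl->page_sz, msl->base_va);
    return 0; /* 0 = keep walking */
}

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return 1;
    rte_memseg_list_walk(print_msl, NULL);
    rte_eal_cleanup();
    return 0;
}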
00:05:22.378 EAL: PCI memory mapped at 0x20200102a000 00:05:22.378 EAL: PCI memory mapped at 0x20200102b000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200102c000 00:05:22.378 EAL: PCI memory mapped at 0x20200102d000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200102e000 00:05:22.378 EAL: PCI memory mapped at 0x20200102f000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001030000 00:05:22.378 EAL: PCI memory mapped at 0x202001031000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001032000 00:05:22.378 EAL: PCI memory mapped at 0x202001033000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001034000 00:05:22.378 EAL: PCI memory mapped at 0x202001035000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001036000 00:05:22.378 EAL: PCI memory mapped at 0x202001037000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001038000 00:05:22.378 EAL: PCI memory mapped at 0x202001039000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200103a000 00:05:22.378 EAL: PCI memory mapped at 0x20200103b000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200103c000 00:05:22.378 EAL: PCI memory mapped at 0x20200103d000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:22.378 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200103e000 00:05:22.378 EAL: PCI memory mapped at 0x20200103f000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001040000 00:05:22.378 EAL: PCI memory mapped at 0x202001041000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:05:22.378 EAL: probe driver: 
8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001042000 00:05:22.378 EAL: PCI memory mapped at 0x202001043000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001044000 00:05:22.378 EAL: PCI memory mapped at 0x202001045000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001046000 00:05:22.378 EAL: PCI memory mapped at 0x202001047000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001048000 00:05:22.378 EAL: PCI memory mapped at 0x202001049000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:01.5 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200104a000 00:05:22.378 EAL: PCI memory mapped at 0x20200104b000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200104c000 00:05:22.378 EAL: PCI memory mapped at 0x20200104d000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x20200104e000 00:05:22.378 EAL: PCI memory mapped at 0x20200104f000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001050000 00:05:22.378 EAL: PCI memory mapped at 0x202001051000 00:05:22.378 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:22.378 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:05:22.378 EAL: probe driver: 8086:37c9 qat 00:05:22.378 EAL: PCI memory mapped at 0x202001052000 00:05:22.379 EAL: PCI memory mapped at 0x202001053000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:22.379 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001054000 00:05:22.379 EAL: PCI memory mapped at 0x202001055000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:22.379 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001056000 00:05:22.379 EAL: PCI memory mapped at 0x202001057000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:22.379 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001058000 00:05:22.379 EAL: PCI memory mapped at 0x202001059000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:22.379 EAL: PCI device 0000:1e:02.5 on NUMA socket 0 00:05:22.379 EAL: 
probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200105a000 00:05:22.379 EAL: PCI memory mapped at 0x20200105b000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:22.379 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200105c000 00:05:22.379 EAL: PCI memory mapped at 0x20200105d000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:22.379 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200105e000 00:05:22.379 EAL: PCI memory mapped at 0x20200105f000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:22.379 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001060000 00:05:22.379 EAL: PCI memory mapped at 0x202001061000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001060000 00:05:22.379 EAL: PCI memory unmapped at 0x202001061000 00:05:22.379 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001062000 00:05:22.379 EAL: PCI memory mapped at 0x202001063000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001062000 00:05:22.379 EAL: PCI memory unmapped at 0x202001063000 00:05:22.379 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001064000 00:05:22.379 EAL: PCI memory mapped at 0x202001065000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001064000 00:05:22.379 EAL: PCI memory unmapped at 0x202001065000 00:05:22.379 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001066000 00:05:22.379 EAL: PCI memory mapped at 0x202001067000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001066000 00:05:22.379 EAL: PCI memory unmapped at 0x202001067000 00:05:22.379 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001068000 00:05:22.379 EAL: PCI memory mapped at 0x202001069000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001068000 00:05:22.379 EAL: PCI memory unmapped at 0x202001069000 00:05:22.379 EAL: Requested device 0000:3d:01.4 
cannot be used 00:05:22.379 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200106a000 00:05:22.379 EAL: PCI memory mapped at 0x20200106b000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x20200106a000 00:05:22.379 EAL: PCI memory unmapped at 0x20200106b000 00:05:22.379 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200106c000 00:05:22.379 EAL: PCI memory mapped at 0x20200106d000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x20200106c000 00:05:22.379 EAL: PCI memory unmapped at 0x20200106d000 00:05:22.379 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200106e000 00:05:22.379 EAL: PCI memory mapped at 0x20200106f000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x20200106e000 00:05:22.379 EAL: PCI memory unmapped at 0x20200106f000 00:05:22.379 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001070000 00:05:22.379 EAL: PCI memory mapped at 0x202001071000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001070000 00:05:22.379 EAL: PCI memory unmapped at 0x202001071000 00:05:22.379 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001072000 00:05:22.379 EAL: PCI memory mapped at 0x202001073000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001072000 00:05:22.379 EAL: PCI memory unmapped at 0x202001073000 00:05:22.379 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001074000 00:05:22.379 EAL: PCI memory mapped at 0x202001075000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001074000 00:05:22.379 EAL: PCI memory unmapped at 0x202001075000 00:05:22.379 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001076000 00:05:22.379 EAL: PCI memory mapped at 0x202001077000 00:05:22.379 EAL: Probe 
PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001076000 00:05:22.379 EAL: PCI memory unmapped at 0x202001077000 00:05:22.379 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001078000 00:05:22.379 EAL: PCI memory mapped at 0x202001079000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001078000 00:05:22.379 EAL: PCI memory unmapped at 0x202001079000 00:05:22.379 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200107a000 00:05:22.379 EAL: PCI memory mapped at 0x20200107b000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x20200107a000 00:05:22.379 EAL: PCI memory unmapped at 0x20200107b000 00:05:22.379 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200107c000 00:05:22.379 EAL: PCI memory mapped at 0x20200107d000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x20200107c000 00:05:22.379 EAL: PCI memory unmapped at 0x20200107d000 00:05:22.379 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:22.379 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x20200107e000 00:05:22.379 EAL: PCI memory mapped at 0x20200107f000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x20200107e000 00:05:22.379 EAL: PCI memory unmapped at 0x20200107f000 00:05:22.379 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:22.379 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001080000 00:05:22.379 EAL: PCI memory mapped at 0x202001081000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.379 EAL: PCI memory unmapped at 0x202001080000 00:05:22.379 EAL: PCI memory unmapped at 0x202001081000 00:05:22.379 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:22.379 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:05:22.379 EAL: probe driver: 8086:37c9 qat 00:05:22.379 EAL: PCI memory mapped at 0x202001082000 00:05:22.379 EAL: PCI memory mapped at 0x202001083000 00:05:22.379 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:22.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001082000 00:05:22.380 EAL: PCI memory unmapped at 0x202001083000 
00:05:22.380 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x202001084000 00:05:22.380 EAL: PCI memory mapped at 0x202001085000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001084000 00:05:22.380 EAL: PCI memory unmapped at 0x202001085000 00:05:22.380 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x202001086000 00:05:22.380 EAL: PCI memory mapped at 0x202001087000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001086000 00:05:22.380 EAL: PCI memory unmapped at 0x202001087000 00:05:22.380 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x202001088000 00:05:22.380 EAL: PCI memory mapped at 0x202001089000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001088000 00:05:22.380 EAL: PCI memory unmapped at 0x202001089000 00:05:22.380 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x20200108a000 00:05:22.380 EAL: PCI memory mapped at 0x20200108b000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x20200108a000 00:05:22.380 EAL: PCI memory unmapped at 0x20200108b000 00:05:22.380 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x20200108c000 00:05:22.380 EAL: PCI memory mapped at 0x20200108d000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x20200108c000 00:05:22.380 EAL: PCI memory unmapped at 0x20200108d000 00:05:22.380 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x20200108e000 00:05:22.380 EAL: PCI memory mapped at 0x20200108f000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x20200108e000 00:05:22.380 EAL: PCI memory unmapped at 0x20200108f000 00:05:22.380 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x202001090000 00:05:22.380 EAL: PCI memory 
mapped at 0x202001091000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001090000 00:05:22.380 EAL: PCI memory unmapped at 0x202001091000 00:05:22.380 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x202001092000 00:05:22.380 EAL: PCI memory mapped at 0x202001093000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001092000 00:05:22.380 EAL: PCI memory unmapped at 0x202001093000 00:05:22.380 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x202001094000 00:05:22.380 EAL: PCI memory mapped at 0x202001095000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001094000 00:05:22.380 EAL: PCI memory unmapped at 0x202001095000 00:05:22.380 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x202001096000 00:05:22.380 EAL: PCI memory mapped at 0x202001097000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001096000 00:05:22.380 EAL: PCI memory unmapped at 0x202001097000 00:05:22.380 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x202001098000 00:05:22.380 EAL: PCI memory mapped at 0x202001099000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x202001098000 00:05:22.380 EAL: PCI memory unmapped at 0x202001099000 00:05:22.380 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x20200109a000 00:05:22.380 EAL: PCI memory mapped at 0x20200109b000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x20200109a000 00:05:22.380 EAL: PCI memory unmapped at 0x20200109b000 00:05:22.380 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x20200109c000 00:05:22.380 EAL: PCI memory mapped at 0x20200109d000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x20200109c000 
00:05:22.380 EAL: PCI memory unmapped at 0x20200109d000 00:05:22.380 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:22.380 EAL: PCI device 0000:3f:02.7 on NUMA socket 0 00:05:22.380 EAL: probe driver: 8086:37c9 qat 00:05:22.380 EAL: PCI memory mapped at 0x20200109e000 00:05:22.380 EAL: PCI memory mapped at 0x20200109f000 00:05:22.380 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:22.380 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:22.380 EAL: PCI memory unmapped at 0x20200109e000 00:05:22.380 EAL: PCI memory unmapped at 0x20200109f000 00:05:22.380 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:22.380 EAL: Mem event callback 'spdk:(nil)' registered 00:05:22.380 00:05:22.380 00:05:22.380 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.380 http://cunit.sourceforge.net/ 00:05:22.380 00:05:22.380 00:05:22.380 Suite: components_suite 00:05:22.380 Test: vtophys_malloc_test ...passed 00:05:22.380 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:22.380 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.380 EAL: Restoring previous memory policy: 4 00:05:22.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.380 EAL: request: mp_malloc_sync 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: Heap on socket 0 was expanded by 4MB 00:05:22.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.380 EAL: request: mp_malloc_sync 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: Heap on socket 0 was shrunk by 4MB 00:05:22.380 EAL: Trying to obtain current memory policy. 00:05:22.380 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.380 EAL: Restoring previous memory policy: 4 00:05:22.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.380 EAL: request: mp_malloc_sync 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: Heap on socket 0 was expanded by 6MB 00:05:22.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.380 EAL: request: mp_malloc_sync 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: Heap on socket 0 was shrunk by 6MB 00:05:22.380 EAL: Trying to obtain current memory policy. 00:05:22.380 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.380 EAL: Restoring previous memory policy: 4 00:05:22.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.380 EAL: request: mp_malloc_sync 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: Heap on socket 0 was expanded by 10MB 00:05:22.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.380 EAL: request: mp_malloc_sync 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: Heap on socket 0 was shrunk by 10MB 00:05:22.380 EAL: Trying to obtain current memory policy. 
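The long run of "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "Requested device ... cannot be used" messages above comes from the qat PMD registering only a bounded number of endpoints and skipping the rest of the VFs, which the EAL then unmaps. In this CI run the full set of VFs is intentionally exposed; the sketch below only illustrates the EAL-level knob those warnings relate to: restricting the PCI scan with -a/--allow so that only the VFs you need are probed. The BDFs are copied from the log purely as examples.

#include <rte_eal.h>

int main(void)
{
    /* Hedged sketch: probe only two QAT VFs instead of every VF on the host.
     * "-a" (--allow) restricts the PCI bus scan, so the qat PMD never reaches
     * its internal device limit and no "cannot be used" warnings are logged. */
    char *eal_args[] = {
        "qat_allow_sketch",
        "-a", "0000:1a:01.0",
        "-a", "0000:1a:01.1",
    };
    int eal_argc = (int)(sizeof(eal_args) / sizeof(eal_args[0]));

    if (rte_eal_init(eal_argc, eal_args) < 0)
        return 1;
    rte_eal_cleanup();
    return 0;
}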
00:05:22.380 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.380 EAL: Restoring previous memory policy: 4 00:05:22.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.380 EAL: request: mp_malloc_sync 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: Heap on socket 0 was expanded by 18MB 00:05:22.380 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.380 EAL: request: mp_malloc_sync 00:05:22.380 EAL: No shared files mode enabled, IPC is disabled 00:05:22.380 EAL: Heap on socket 0 was shrunk by 18MB 00:05:22.381 EAL: Trying to obtain current memory policy. 00:05:22.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.381 EAL: Restoring previous memory policy: 4 00:05:22.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.381 EAL: request: mp_malloc_sync 00:05:22.381 EAL: No shared files mode enabled, IPC is disabled 00:05:22.381 EAL: Heap on socket 0 was expanded by 34MB 00:05:22.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.381 EAL: request: mp_malloc_sync 00:05:22.381 EAL: No shared files mode enabled, IPC is disabled 00:05:22.381 EAL: Heap on socket 0 was shrunk by 34MB 00:05:22.381 EAL: Trying to obtain current memory policy. 00:05:22.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.381 EAL: Restoring previous memory policy: 4 00:05:22.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.381 EAL: request: mp_malloc_sync 00:05:22.381 EAL: No shared files mode enabled, IPC is disabled 00:05:22.381 EAL: Heap on socket 0 was expanded by 66MB 00:05:22.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.381 EAL: request: mp_malloc_sync 00:05:22.381 EAL: No shared files mode enabled, IPC is disabled 00:05:22.381 EAL: Heap on socket 0 was shrunk by 66MB 00:05:22.381 EAL: Trying to obtain current memory policy. 00:05:22.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.381 EAL: Restoring previous memory policy: 4 00:05:22.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.381 EAL: request: mp_malloc_sync 00:05:22.381 EAL: No shared files mode enabled, IPC is disabled 00:05:22.381 EAL: Heap on socket 0 was expanded by 130MB 00:05:22.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.381 EAL: request: mp_malloc_sync 00:05:22.381 EAL: No shared files mode enabled, IPC is disabled 00:05:22.381 EAL: Heap on socket 0 was shrunk by 130MB 00:05:22.381 EAL: Trying to obtain current memory policy. 00:05:22.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.639 EAL: Restoring previous memory policy: 4 00:05:22.639 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.639 EAL: request: mp_malloc_sync 00:05:22.639 EAL: No shared files mode enabled, IPC is disabled 00:05:22.639 EAL: Heap on socket 0 was expanded by 258MB 00:05:22.639 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.639 EAL: request: mp_malloc_sync 00:05:22.639 EAL: No shared files mode enabled, IPC is disabled 00:05:22.639 EAL: Heap on socket 0 was shrunk by 258MB 00:05:22.639 EAL: Trying to obtain current memory policy. 
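The env_vtophys run above exercises dynamic heap growth: each allocation makes the socket-0 heap expand by whole hugepages, the registered mem event callback 'spdk:(nil)' fires, and the heap shrinks again on free. A hedged DPDK-only sketch of the same pattern, with the callback name, program and the 34 MB size chosen only for illustration (in legacy/pre-allocated memory mode these events would not fire).

#include <stdio.h>
#include <inttypes.h>
#include <rte_eal.h>
#include <rte_memory.h>
#include <rte_malloc.h>

/* Fires on every heap grow/shrink, like the 'spdk:(nil)' callback
 * reported in the "Calling mem event callback" lines above. */
static void mem_event(enum rte_mem_event event_type, const void *addr,
                      size_t len, void *arg)
{
    (void)arg;
    printf("%s: addr %p, len %zu\n",
           event_type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return 1;

    rte_mem_event_callback_register("sketch", mem_event, NULL);

    /* A 34 MB request (one of the sizes exercised above) forces the socket-0
     * heap to grow by whole hugepages, then shrink again on free. */
    void *buf = rte_malloc(NULL, 34 * 1024 * 1024, 0);
    if (buf != NULL) {
        printf("IOVA of buf: 0x%" PRIx64 "\n",
               (uint64_t)rte_malloc_virt2iova(buf));   /* the vtophys step */
        rte_free(buf);
    }

    rte_eal_cleanup();
    return 0;
}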
00:05:22.639 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.639 EAL: Restoring previous memory policy: 4 00:05:22.639 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.639 EAL: request: mp_malloc_sync 00:05:22.639 EAL: No shared files mode enabled, IPC is disabled 00:05:22.639 EAL: Heap on socket 0 was expanded by 514MB 00:05:22.897 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.897 EAL: request: mp_malloc_sync 00:05:22.897 EAL: No shared files mode enabled, IPC is disabled 00:05:22.897 EAL: Heap on socket 0 was shrunk by 514MB 00:05:22.897 EAL: Trying to obtain current memory policy. 00:05:22.897 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.156 EAL: Restoring previous memory policy: 4 00:05:23.156 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.156 EAL: request: mp_malloc_sync 00:05:23.156 EAL: No shared files mode enabled, IPC is disabled 00:05:23.156 EAL: Heap on socket 0 was expanded by 1026MB 00:05:23.156 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.414 EAL: request: mp_malloc_sync 00:05:23.414 EAL: No shared files mode enabled, IPC is disabled 00:05:23.414 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:23.414 passed 00:05:23.414 00:05:23.414 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.414 suites 1 1 n/a 0 0 00:05:23.414 tests 2 2 2 0 0 00:05:23.414 asserts 6485 6485 6485 0 n/a 00:05:23.414 00:05:23.414 Elapsed time = 0.964 seconds 00:05:23.414 EAL: No shared files mode enabled, IPC is disabled 00:05:23.414 EAL: No shared files mode enabled, IPC is disabled 00:05:23.414 EAL: No shared files mode enabled, IPC is disabled 00:05:23.414 00:05:23.414 real 0m1.138s 00:05:23.414 user 0m0.643s 00:05:23.414 sys 0m0.450s 00:05:23.414 10:12:48 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.414 10:12:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:23.414 ************************************ 00:05:23.414 END TEST env_vtophys 00:05:23.414 ************************************ 00:05:23.414 10:12:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:23.414 10:12:48 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:23.414 10:12:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.414 10:12:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.414 10:12:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.414 ************************************ 00:05:23.414 START TEST env_pci 00:05:23.414 ************************************ 00:05:23.414 10:12:48 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:23.414 00:05:23.414 00:05:23.414 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.414 http://cunit.sourceforge.net/ 00:05:23.414 00:05:23.414 00:05:23.414 Suite: pci 00:05:23.414 Test: pci_hook ...[2024-07-15 10:12:48.134056] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1695318 has claimed it 00:05:23.414 EAL: Cannot find device (10000:00:01.0) 00:05:23.414 EAL: Failed to attach device on primary process 00:05:23.414 passed 00:05:23.414 00:05:23.414 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.414 suites 1 1 n/a 0 0 00:05:23.414 tests 1 1 1 0 0 00:05:23.414 asserts 25 25 25 0 n/a 00:05:23.414 00:05:23.414 Elapsed time = 0.039 seconds 
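The env_pci pci_hook test above deliberately claims a device in the fake PCI domain 10000 and checks that a second claim fails; the error message points at the per-device lock file /var/tmp/spdk_pci_lock_<BDF>. The sketch below illustrates that kind of advisory file lock with plain POSIX calls; it is a mechanism illustration only, not SPDK's actual implementation, and the path used in main() is hypothetical.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Try to take an exclusive advisory lock on a per-device lock file.
 * Returns the open fd on success (keep it open to hold the claim), -1 on failure. */
static int claim_device_lock(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return -1;

    struct flock fl = {
        .l_type = F_WRLCK,      /* exclusive write lock */
        .l_whence = SEEK_SET,
        .l_start = 0,
        .l_len = 0,             /* lock the whole file */
    };
    if (fcntl(fd, F_SETLK, &fl) < 0) {
        /* Another process already claimed the device. */
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    /* Hypothetical BDF, chosen only for the example. */
    int fd = claim_device_lock("/var/tmp/spdk_pci_lock_0000:00:01.0");
    puts(fd >= 0 ? "claimed" : "already claimed by another process");
    if (fd >= 0)
        close(fd);
    return 0;
}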
00:05:23.414 00:05:23.414 real 0m0.068s 00:05:23.414 user 0m0.021s 00:05:23.414 sys 0m0.047s 00:05:23.414 10:12:48 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.414 10:12:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:23.414 ************************************ 00:05:23.414 END TEST env_pci 00:05:23.414 ************************************ 00:05:23.674 10:12:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:23.674 10:12:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:23.674 10:12:48 env -- env/env.sh@15 -- # uname 00:05:23.674 10:12:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:23.674 10:12:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:23.675 10:12:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.675 10:12:48 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:23.675 10:12:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.675 10:12:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:23.675 ************************************ 00:05:23.675 START TEST env_dpdk_post_init 00:05:23.675 ************************************ 00:05:23.675 10:12:48 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.675 EAL: Detected CPU lcores: 112 00:05:23.675 EAL: Detected NUMA nodes: 2 00:05:23.675 EAL: Detected shared linkage of DPDK 00:05:23.675 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.675 EAL: Selected IOVA mode 'PA' 00:05:23.675 EAL: VFIO support initialized 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, 
max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI 
driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 
(socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.675 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.675 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:23.675 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:23.676 CRYPTODEV: Creating 
cryptodev 0000:1c:02.4_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:23.676 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 
0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:23.676 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:23.676 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:02.0 
cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.676 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:23.676 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:23.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:05:23.677 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:23.677 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:23.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.677 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:23.677 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.677 EAL: Using IOMMU type 1 (Type 1) 00:05:23.677 EAL: Ignore mapping IO port bar(1) 00:05:23.677 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:23.677 EAL: Ignore mapping IO port bar(1) 00:05:23.677 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:23.677 EAL: Ignore mapping IO port bar(1) 00:05:23.677 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:23.677 EAL: Ignore mapping IO port bar(1) 00:05:23.677 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:23.937 EAL: Ignore mapping IO port bar(1) 00:05:23.937 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:23.937 EAL: Ignore mapping IO port bar(1) 00:05:23.937 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:23.937 EAL: Ignore mapping IO port bar(1) 00:05:23.937 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:23.937 EAL: Ignore mapping IO port bar(1) 00:05:23.937 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:23.937 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 
0000:3f:01.2 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:23.937 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:23.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:23.937 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:23.937 EAL: Ignore mapping IO port bar(1) 00:05:23.937 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:23.937 EAL: Ignore mapping IO port bar(1) 00:05:23.937 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:23.937 EAL: Ignore mapping IO port bar(1) 00:05:23.937 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:23.937 EAL: Ignore mapping IO port bar(1) 00:05:23.938 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:23.938 EAL: Ignore mapping IO port bar(1) 
00:05:23.938 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:23.938 EAL: Ignore mapping IO port bar(1) 00:05:23.938 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:23.938 EAL: Ignore mapping IO port bar(1) 00:05:23.938 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:23.938 EAL: Ignore mapping IO port bar(1) 00:05:23.938 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:24.875 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:29.091 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:29.092 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000 00:05:29.092 Starting DPDK initialization... 00:05:29.092 Starting SPDK post initialization... 00:05:29.092 SPDK NVMe probe 00:05:29.092 Attaching to 0000:d8:00.0 00:05:29.092 Attached to 0000:d8:00.0 00:05:29.092 Cleaning up... 00:05:29.092 00:05:29.092 real 0m5.362s 00:05:29.092 user 0m3.961s 00:05:29.092 sys 0m0.453s 00:05:29.092 10:12:53 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.092 10:12:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.092 ************************************ 00:05:29.092 END TEST env_dpdk_post_init 00:05:29.092 ************************************ 00:05:29.092 10:12:53 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.092 10:12:53 env -- env/env.sh@26 -- # uname 00:05:29.092 10:12:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.092 10:12:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.092 10:12:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.092 10:12:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.092 10:12:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.092 ************************************ 00:05:29.092 START TEST env_mem_callbacks 00:05:29.092 ************************************ 00:05:29.092 10:12:53 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.092 EAL: Detected CPU lcores: 112 00:05:29.092 EAL: Detected NUMA nodes: 2 00:05:29.092 EAL: Detected shared linkage of DPDK 00:05:29.092 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.092 EAL: Selected IOVA mode 'PA' 00:05:29.092 EAL: VFIO support initialized 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:05:29.092 
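[Editor's note] The env_dpdk_post_init output above ends with the usual flow: DPDK/EAL initialization, SPDK post-initialization, an NVMe probe that attaches to 0000:d8:00.0, and cleanup. The following is a minimal, illustrative C sketch of that flow using public SPDK APIs; it is not the test binary itself, and the program name, callback names, and the `g_ctrlr` global are invented for this example.

```c
/*
 * Illustrative sketch of the env-init + NVMe probe flow reported above
 * ("Starting DPDK initialization... SPDK NVMe probe ... Cleaning up...").
 * Not the env_dpdk_post_init test source; error handling is trimmed.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr; /* illustrative global */

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* accept every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	g_ctrlr = ctrlr;
}

int
main(int argc, char **argv)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);        /* EAL parameters are derived from these opts */
	opts.name = "env_post_init_sketch";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Walk the local PCIe bus and attach to any NVMe controller found. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}

	printf("Cleaning up...\n");
	if (g_ctrlr != NULL) {
		spdk_nvme_detach(g_ctrlr);
	}
	return 0;
}
```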
CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 
0000:1a:02.2_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:05:29.092 CRYPTODEV: 
Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.092 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:05:29.092 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.092 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 
0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue 
pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating 
cryptodev 0000:1e:02.2_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:29.093 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:05:29.093 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:29.093 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.093 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:29.093 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:29.093 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:3f:01.5 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:29.094 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:29.094 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.094 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:29.094 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.094 00:05:29.094 00:05:29.094 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.094 http://cunit.sourceforge.net/ 00:05:29.094 00:05:29.094 00:05:29.094 Suite: memory 00:05:29.094 Test: test ... 
00:05:29.094 register 0x200000200000 2097152 00:05:29.094 malloc 3145728 00:05:29.094 register 0x200000400000 4194304 00:05:29.094 buf 0x200000500000 len 3145728 PASSED 00:05:29.094 malloc 64 00:05:29.094 buf 0x2000004fff40 len 64 PASSED 00:05:29.094 malloc 4194304 00:05:29.094 register 0x200000800000 6291456 00:05:29.094 buf 0x200000a00000 len 4194304 PASSED 00:05:29.094 free 0x200000500000 3145728 00:05:29.094 free 0x2000004fff40 64 00:05:29.094 unregister 0x200000400000 4194304 PASSED 00:05:29.094 free 0x200000a00000 4194304 00:05:29.094 unregister 0x200000800000 6291456 PASSED 00:05:29.094 malloc 8388608 00:05:29.094 register 0x200000400000 10485760 00:05:29.094 buf 0x200000600000 len 8388608 PASSED 00:05:29.094 free 0x200000600000 8388608 00:05:29.094 unregister 0x200000400000 10485760 PASSED 00:05:29.094 passed 00:05:29.094 00:05:29.094 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.094 suites 1 1 n/a 0 0 00:05:29.094 tests 1 1 1 0 0 00:05:29.094 asserts 15 15 15 0 n/a 00:05:29.094 00:05:29.094 Elapsed time = 0.004 seconds 00:05:29.094 00:05:29.094 real 0m0.085s 00:05:29.094 user 0m0.026s 00:05:29.094 sys 0m0.058s 00:05:29.094 10:12:53 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.094 10:12:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:29.094 ************************************ 00:05:29.094 END TEST env_mem_callbacks 00:05:29.094 ************************************ 00:05:29.094 10:12:53 env -- common/autotest_common.sh@1142 -- # return 0 00:05:29.094 00:05:29.094 real 0m7.254s 00:05:29.094 user 0m4.947s 00:05:29.094 sys 0m1.344s 00:05:29.094 10:12:53 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.094 10:12:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.094 ************************************ 00:05:29.094 END TEST env 00:05:29.094 ************************************ 00:05:29.094 10:12:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.094 10:12:53 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.094 10:12:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.094 10:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.094 10:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:29.359 ************************************ 00:05:29.359 START TEST rpc 00:05:29.359 ************************************ 00:05:29.359 10:12:53 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:29.359 * Looking for test storage... 00:05:29.359 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:29.359 10:12:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1696492 00:05:29.359 10:12:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:29.359 10:12:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.359 10:12:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1696492 00:05:29.359 10:12:54 rpc -- common/autotest_common.sh@829 -- # '[' -z 1696492 ']' 00:05:29.359 10:12:54 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.359 10:12:54 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.359 10:12:54 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:29.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.359 10:12:54 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.359 10:12:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.359 [2024-07-15 10:12:54.043519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:29.359 [2024-07-15 10:12:54.043567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1696492 ] 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:05:29.359 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:29.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:29.359 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:29.359 [2024-07-15 10:12:54.135259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.617 [2024-07-15 10:12:54.210350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.617 [2024-07-15 10:12:54.210390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1696492' to capture a snapshot of events at runtime. 00:05:29.617 [2024-07-15 10:12:54.210399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.617 [2024-07-15 10:12:54.210407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.617 [2024-07-15 10:12:54.210415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1696492 for offline analysis/debug. 
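A minimal way to act on the trace notices above, assuming the workspace layout used by this job (the spdk_trace invocation is quoted verbatim from the notice; the build/bin path is an assumption):

    # While spdk_tgt (pid 1696492) is still running, snapshot the bdev tracepoints:
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_trace -s spdk_tgt -p 1696492
    # Or, as the notice suggests, preserve the shared-memory trace file the target
    # leaves behind for offline analysis/debug:
    cp /dev/shm/spdk_tgt_trace.pid1696492 /tmp/spdk_tgt_trace.pid1696492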
00:05:29.617 [2024-07-15 10:12:54.210435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.184 10:12:54 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.184 10:12:54 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:30.184 10:12:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:30.184 10:12:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:30.184 10:12:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.184 10:12:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.184 10:12:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.184 10:12:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.184 10:12:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.184 ************************************ 00:05:30.184 START TEST rpc_integrity 00:05:30.184 ************************************ 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.184 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.184 { 00:05:30.184 "name": "Malloc0", 00:05:30.184 "aliases": [ 00:05:30.184 "9eb58a31-4c1b-4cc9-9374-a785d39a48a8" 00:05:30.184 ], 00:05:30.184 "product_name": "Malloc disk", 00:05:30.184 "block_size": 512, 00:05:30.184 "num_blocks": 16384, 00:05:30.184 "uuid": "9eb58a31-4c1b-4cc9-9374-a785d39a48a8", 00:05:30.184 "assigned_rate_limits": { 00:05:30.184 "rw_ios_per_sec": 0, 00:05:30.184 "rw_mbytes_per_sec": 0, 00:05:30.184 "r_mbytes_per_sec": 0, 00:05:30.184 "w_mbytes_per_sec": 0 00:05:30.184 }, 00:05:30.184 
"claimed": false, 00:05:30.184 "zoned": false, 00:05:30.184 "supported_io_types": { 00:05:30.184 "read": true, 00:05:30.184 "write": true, 00:05:30.184 "unmap": true, 00:05:30.184 "flush": true, 00:05:30.184 "reset": true, 00:05:30.184 "nvme_admin": false, 00:05:30.184 "nvme_io": false, 00:05:30.184 "nvme_io_md": false, 00:05:30.184 "write_zeroes": true, 00:05:30.184 "zcopy": true, 00:05:30.184 "get_zone_info": false, 00:05:30.184 "zone_management": false, 00:05:30.184 "zone_append": false, 00:05:30.184 "compare": false, 00:05:30.184 "compare_and_write": false, 00:05:30.184 "abort": true, 00:05:30.184 "seek_hole": false, 00:05:30.184 "seek_data": false, 00:05:30.184 "copy": true, 00:05:30.184 "nvme_iov_md": false 00:05:30.184 }, 00:05:30.184 "memory_domains": [ 00:05:30.184 { 00:05:30.184 "dma_device_id": "system", 00:05:30.184 "dma_device_type": 1 00:05:30.184 }, 00:05:30.184 { 00:05:30.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.184 "dma_device_type": 2 00:05:30.184 } 00:05:30.184 ], 00:05:30.184 "driver_specific": {} 00:05:30.184 } 00:05:30.184 ]' 00:05:30.184 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:30.441 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.441 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.441 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.441 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 [2024-07-15 10:12:54.993778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.441 [2024-07-15 10:12:54.993806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.441 [2024-07-15 10:12:54.993819] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x88f5f0 00:05:30.441 [2024-07-15 10:12:54.993827] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.441 [2024-07-15 10:12:54.994923] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.441 [2024-07-15 10:12:54.994945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.441 Passthru0 00:05:30.441 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.441 10:12:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.441 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.441 10:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.441 { 00:05:30.441 "name": "Malloc0", 00:05:30.441 "aliases": [ 00:05:30.441 "9eb58a31-4c1b-4cc9-9374-a785d39a48a8" 00:05:30.441 ], 00:05:30.441 "product_name": "Malloc disk", 00:05:30.441 "block_size": 512, 00:05:30.441 "num_blocks": 16384, 00:05:30.441 "uuid": "9eb58a31-4c1b-4cc9-9374-a785d39a48a8", 00:05:30.441 "assigned_rate_limits": { 00:05:30.441 "rw_ios_per_sec": 0, 00:05:30.441 "rw_mbytes_per_sec": 0, 00:05:30.441 "r_mbytes_per_sec": 0, 00:05:30.441 "w_mbytes_per_sec": 0 00:05:30.441 }, 00:05:30.441 "claimed": true, 00:05:30.441 "claim_type": "exclusive_write", 00:05:30.441 "zoned": false, 00:05:30.441 "supported_io_types": { 00:05:30.441 "read": true, 00:05:30.441 "write": true, 00:05:30.441 "unmap": true, 00:05:30.441 "flush": true, 
00:05:30.441 "reset": true, 00:05:30.441 "nvme_admin": false, 00:05:30.441 "nvme_io": false, 00:05:30.441 "nvme_io_md": false, 00:05:30.441 "write_zeroes": true, 00:05:30.441 "zcopy": true, 00:05:30.441 "get_zone_info": false, 00:05:30.441 "zone_management": false, 00:05:30.441 "zone_append": false, 00:05:30.441 "compare": false, 00:05:30.441 "compare_and_write": false, 00:05:30.441 "abort": true, 00:05:30.441 "seek_hole": false, 00:05:30.441 "seek_data": false, 00:05:30.441 "copy": true, 00:05:30.441 "nvme_iov_md": false 00:05:30.441 }, 00:05:30.441 "memory_domains": [ 00:05:30.441 { 00:05:30.441 "dma_device_id": "system", 00:05:30.441 "dma_device_type": 1 00:05:30.441 }, 00:05:30.441 { 00:05:30.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.441 "dma_device_type": 2 00:05:30.441 } 00:05:30.441 ], 00:05:30.441 "driver_specific": {} 00:05:30.441 }, 00:05:30.441 { 00:05:30.441 "name": "Passthru0", 00:05:30.441 "aliases": [ 00:05:30.441 "803dd0da-e475-540d-8e51-95c84b1820cc" 00:05:30.441 ], 00:05:30.441 "product_name": "passthru", 00:05:30.441 "block_size": 512, 00:05:30.441 "num_blocks": 16384, 00:05:30.441 "uuid": "803dd0da-e475-540d-8e51-95c84b1820cc", 00:05:30.441 "assigned_rate_limits": { 00:05:30.441 "rw_ios_per_sec": 0, 00:05:30.441 "rw_mbytes_per_sec": 0, 00:05:30.441 "r_mbytes_per_sec": 0, 00:05:30.441 "w_mbytes_per_sec": 0 00:05:30.441 }, 00:05:30.441 "claimed": false, 00:05:30.441 "zoned": false, 00:05:30.441 "supported_io_types": { 00:05:30.441 "read": true, 00:05:30.441 "write": true, 00:05:30.441 "unmap": true, 00:05:30.441 "flush": true, 00:05:30.441 "reset": true, 00:05:30.441 "nvme_admin": false, 00:05:30.441 "nvme_io": false, 00:05:30.441 "nvme_io_md": false, 00:05:30.441 "write_zeroes": true, 00:05:30.441 "zcopy": true, 00:05:30.441 "get_zone_info": false, 00:05:30.441 "zone_management": false, 00:05:30.441 "zone_append": false, 00:05:30.441 "compare": false, 00:05:30.441 "compare_and_write": false, 00:05:30.441 "abort": true, 00:05:30.441 "seek_hole": false, 00:05:30.441 "seek_data": false, 00:05:30.441 "copy": true, 00:05:30.441 "nvme_iov_md": false 00:05:30.441 }, 00:05:30.441 "memory_domains": [ 00:05:30.441 { 00:05:30.441 "dma_device_id": "system", 00:05:30.441 "dma_device_type": 1 00:05:30.441 }, 00:05:30.441 { 00:05:30.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.441 "dma_device_type": 2 00:05:30.441 } 00:05:30.441 ], 00:05:30.441 "driver_specific": { 00:05:30.441 "passthru": { 00:05:30.441 "name": "Passthru0", 00:05:30.441 "base_bdev_name": "Malloc0" 00:05:30.441 } 00:05:30.441 } 00:05:30.441 } 00:05:30.441 ]' 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd 
bdev_get_bdevs 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.441 10:12:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.441 00:05:30.441 real 0m0.276s 00:05:30.441 user 0m0.177s 00:05:30.441 sys 0m0.042s 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.441 10:12:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 ************************************ 00:05:30.441 END TEST rpc_integrity 00:05:30.441 ************************************ 00:05:30.441 10:12:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.441 10:12:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.441 10:12:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.441 10:12:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.441 10:12:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 ************************************ 00:05:30.441 START TEST rpc_plugins 00:05:30.441 ************************************ 00:05:30.441 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:30.441 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.441 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.441 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.441 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.441 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.441 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.441 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.698 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.698 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.698 { 00:05:30.698 "name": "Malloc1", 00:05:30.698 "aliases": [ 00:05:30.698 "9e37ee9d-2166-4abc-89a4-5728d1f16d45" 00:05:30.698 ], 00:05:30.698 "product_name": "Malloc disk", 00:05:30.698 "block_size": 4096, 00:05:30.698 "num_blocks": 256, 00:05:30.698 "uuid": "9e37ee9d-2166-4abc-89a4-5728d1f16d45", 00:05:30.698 "assigned_rate_limits": { 00:05:30.698 "rw_ios_per_sec": 0, 00:05:30.698 "rw_mbytes_per_sec": 0, 00:05:30.698 "r_mbytes_per_sec": 0, 00:05:30.698 "w_mbytes_per_sec": 0 00:05:30.698 }, 00:05:30.698 "claimed": false, 00:05:30.698 "zoned": false, 00:05:30.698 "supported_io_types": { 00:05:30.698 "read": true, 00:05:30.698 "write": true, 00:05:30.698 "unmap": true, 00:05:30.698 "flush": true, 00:05:30.698 "reset": true, 00:05:30.698 "nvme_admin": false, 00:05:30.698 "nvme_io": false, 00:05:30.698 "nvme_io_md": false, 00:05:30.698 "write_zeroes": true, 00:05:30.698 "zcopy": true, 00:05:30.698 "get_zone_info": false, 00:05:30.698 "zone_management": false, 00:05:30.698 "zone_append": false, 00:05:30.698 "compare": false, 00:05:30.698 "compare_and_write": false, 00:05:30.698 "abort": true, 00:05:30.699 "seek_hole": false, 00:05:30.699 "seek_data": 
false, 00:05:30.699 "copy": true, 00:05:30.699 "nvme_iov_md": false 00:05:30.699 }, 00:05:30.699 "memory_domains": [ 00:05:30.699 { 00:05:30.699 "dma_device_id": "system", 00:05:30.699 "dma_device_type": 1 00:05:30.699 }, 00:05:30.699 { 00:05:30.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.699 "dma_device_type": 2 00:05:30.699 } 00:05:30.699 ], 00:05:30.699 "driver_specific": {} 00:05:30.699 } 00:05:30.699 ]' 00:05:30.699 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:30.699 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:30.699 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:30.699 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.699 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.699 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.699 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:30.699 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.699 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.699 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.699 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:30.699 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:30.699 10:12:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:30.699 00:05:30.699 real 0m0.138s 00:05:30.699 user 0m0.085s 00:05:30.699 sys 0m0.021s 00:05:30.699 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.699 10:12:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:30.699 ************************************ 00:05:30.699 END TEST rpc_plugins 00:05:30.699 ************************************ 00:05:30.699 10:12:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.699 10:12:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:30.699 10:12:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.699 10:12:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.699 10:12:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.699 ************************************ 00:05:30.699 START TEST rpc_trace_cmd_test 00:05:30.699 ************************************ 00:05:30.699 10:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:30.699 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:30.699 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:30.699 10:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.699 10:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.699 10:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.699 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:30.699 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1696492", 00:05:30.699 "tpoint_group_mask": "0x8", 00:05:30.699 "iscsi_conn": { 00:05:30.699 "mask": "0x2", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "scsi": { 00:05:30.699 "mask": "0x4", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "bdev": { 00:05:30.699 "mask": "0x8", 00:05:30.699 "tpoint_mask": "0xffffffffffffffff" 00:05:30.699 }, 00:05:30.699 "nvmf_rdma": { 00:05:30.699 
"mask": "0x10", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "nvmf_tcp": { 00:05:30.699 "mask": "0x20", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "ftl": { 00:05:30.699 "mask": "0x40", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "blobfs": { 00:05:30.699 "mask": "0x80", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "dsa": { 00:05:30.699 "mask": "0x200", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "thread": { 00:05:30.699 "mask": "0x400", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "nvme_pcie": { 00:05:30.699 "mask": "0x800", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "iaa": { 00:05:30.699 "mask": "0x1000", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "nvme_tcp": { 00:05:30.699 "mask": "0x2000", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "bdev_nvme": { 00:05:30.699 "mask": "0x4000", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 }, 00:05:30.699 "sock": { 00:05:30.699 "mask": "0x8000", 00:05:30.699 "tpoint_mask": "0x0" 00:05:30.699 } 00:05:30.699 }' 00:05:30.699 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:30.956 00:05:30.956 real 0m0.225s 00:05:30.956 user 0m0.189s 00:05:30.956 sys 0m0.030s 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.956 10:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:30.956 ************************************ 00:05:30.956 END TEST rpc_trace_cmd_test 00:05:30.956 ************************************ 00:05:30.956 10:12:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.956 10:12:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:30.956 10:12:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:30.956 10:12:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:30.956 10:12:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.956 10:12:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.956 10:12:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.956 ************************************ 00:05:30.956 START TEST rpc_daemon_integrity 00:05:30.956 ************************************ 00:05:30.956 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:30.956 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.956 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.956 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.956 10:12:55 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.956 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.214 { 00:05:31.214 "name": "Malloc2", 00:05:31.214 "aliases": [ 00:05:31.214 "88c9669d-fef7-4c58-ae24-674834aa7e99" 00:05:31.214 ], 00:05:31.214 "product_name": "Malloc disk", 00:05:31.214 "block_size": 512, 00:05:31.214 "num_blocks": 16384, 00:05:31.214 "uuid": "88c9669d-fef7-4c58-ae24-674834aa7e99", 00:05:31.214 "assigned_rate_limits": { 00:05:31.214 "rw_ios_per_sec": 0, 00:05:31.214 "rw_mbytes_per_sec": 0, 00:05:31.214 "r_mbytes_per_sec": 0, 00:05:31.214 "w_mbytes_per_sec": 0 00:05:31.214 }, 00:05:31.214 "claimed": false, 00:05:31.214 "zoned": false, 00:05:31.214 "supported_io_types": { 00:05:31.214 "read": true, 00:05:31.214 "write": true, 00:05:31.214 "unmap": true, 00:05:31.214 "flush": true, 00:05:31.214 "reset": true, 00:05:31.214 "nvme_admin": false, 00:05:31.214 "nvme_io": false, 00:05:31.214 "nvme_io_md": false, 00:05:31.214 "write_zeroes": true, 00:05:31.214 "zcopy": true, 00:05:31.214 "get_zone_info": false, 00:05:31.214 "zone_management": false, 00:05:31.214 "zone_append": false, 00:05:31.214 "compare": false, 00:05:31.214 "compare_and_write": false, 00:05:31.214 "abort": true, 00:05:31.214 "seek_hole": false, 00:05:31.214 "seek_data": false, 00:05:31.214 "copy": true, 00:05:31.214 "nvme_iov_md": false 00:05:31.214 }, 00:05:31.214 "memory_domains": [ 00:05:31.214 { 00:05:31.214 "dma_device_id": "system", 00:05:31.214 "dma_device_type": 1 00:05:31.214 }, 00:05:31.214 { 00:05:31.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.214 "dma_device_type": 2 00:05:31.214 } 00:05:31.214 ], 00:05:31.214 "driver_specific": {} 00:05:31.214 } 00:05:31.214 ]' 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.214 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.214 [2024-07-15 10:12:55.856107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.214 [2024-07-15 10:12:55.856133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.215 
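The rpc_daemon_integrity case exercises the malloc/passthru RPCs visible in the surrounding xtrace output; a rough by-hand equivalent of the full create/inspect/delete sequence, using scripts/rpc.py over the default /var/tmp/spdk.sock (paths are this workspace's, the jq checks mirror the test's length assertions):

    RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 8 512                      # prints the new bdev name (Malloc2 here)
    $RPC bdev_passthru_create -b Malloc2 -p Passthru0  # claim Malloc2 behind a passthru vbdev
    $RPC bdev_get_bdevs | jq length                    # expect 2: Malloc2 + Passthru0
    $RPC bdev_passthru_delete Passthru0
    $RPC bdev_malloc_delete Malloc2
    $RPC bdev_get_bdevs | jq length                    # expect 0 again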
[2024-07-15 10:12:55.856146] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa3afb0 00:05:31.215 [2024-07-15 10:12:55.856154] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.215 [2024-07-15 10:12:55.857095] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.215 [2024-07-15 10:12:55.857116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.215 Passthru0 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.215 { 00:05:31.215 "name": "Malloc2", 00:05:31.215 "aliases": [ 00:05:31.215 "88c9669d-fef7-4c58-ae24-674834aa7e99" 00:05:31.215 ], 00:05:31.215 "product_name": "Malloc disk", 00:05:31.215 "block_size": 512, 00:05:31.215 "num_blocks": 16384, 00:05:31.215 "uuid": "88c9669d-fef7-4c58-ae24-674834aa7e99", 00:05:31.215 "assigned_rate_limits": { 00:05:31.215 "rw_ios_per_sec": 0, 00:05:31.215 "rw_mbytes_per_sec": 0, 00:05:31.215 "r_mbytes_per_sec": 0, 00:05:31.215 "w_mbytes_per_sec": 0 00:05:31.215 }, 00:05:31.215 "claimed": true, 00:05:31.215 "claim_type": "exclusive_write", 00:05:31.215 "zoned": false, 00:05:31.215 "supported_io_types": { 00:05:31.215 "read": true, 00:05:31.215 "write": true, 00:05:31.215 "unmap": true, 00:05:31.215 "flush": true, 00:05:31.215 "reset": true, 00:05:31.215 "nvme_admin": false, 00:05:31.215 "nvme_io": false, 00:05:31.215 "nvme_io_md": false, 00:05:31.215 "write_zeroes": true, 00:05:31.215 "zcopy": true, 00:05:31.215 "get_zone_info": false, 00:05:31.215 "zone_management": false, 00:05:31.215 "zone_append": false, 00:05:31.215 "compare": false, 00:05:31.215 "compare_and_write": false, 00:05:31.215 "abort": true, 00:05:31.215 "seek_hole": false, 00:05:31.215 "seek_data": false, 00:05:31.215 "copy": true, 00:05:31.215 "nvme_iov_md": false 00:05:31.215 }, 00:05:31.215 "memory_domains": [ 00:05:31.215 { 00:05:31.215 "dma_device_id": "system", 00:05:31.215 "dma_device_type": 1 00:05:31.215 }, 00:05:31.215 { 00:05:31.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.215 "dma_device_type": 2 00:05:31.215 } 00:05:31.215 ], 00:05:31.215 "driver_specific": {} 00:05:31.215 }, 00:05:31.215 { 00:05:31.215 "name": "Passthru0", 00:05:31.215 "aliases": [ 00:05:31.215 "6d0d00b3-889a-5299-bd4a-56c85997ecb7" 00:05:31.215 ], 00:05:31.215 "product_name": "passthru", 00:05:31.215 "block_size": 512, 00:05:31.215 "num_blocks": 16384, 00:05:31.215 "uuid": "6d0d00b3-889a-5299-bd4a-56c85997ecb7", 00:05:31.215 "assigned_rate_limits": { 00:05:31.215 "rw_ios_per_sec": 0, 00:05:31.215 "rw_mbytes_per_sec": 0, 00:05:31.215 "r_mbytes_per_sec": 0, 00:05:31.215 "w_mbytes_per_sec": 0 00:05:31.215 }, 00:05:31.215 "claimed": false, 00:05:31.215 "zoned": false, 00:05:31.215 "supported_io_types": { 00:05:31.215 "read": true, 00:05:31.215 "write": true, 00:05:31.215 "unmap": true, 00:05:31.215 "flush": true, 00:05:31.215 "reset": true, 00:05:31.215 "nvme_admin": false, 00:05:31.215 "nvme_io": false, 00:05:31.215 "nvme_io_md": false, 00:05:31.215 
"write_zeroes": true, 00:05:31.215 "zcopy": true, 00:05:31.215 "get_zone_info": false, 00:05:31.215 "zone_management": false, 00:05:31.215 "zone_append": false, 00:05:31.215 "compare": false, 00:05:31.215 "compare_and_write": false, 00:05:31.215 "abort": true, 00:05:31.215 "seek_hole": false, 00:05:31.215 "seek_data": false, 00:05:31.215 "copy": true, 00:05:31.215 "nvme_iov_md": false 00:05:31.215 }, 00:05:31.215 "memory_domains": [ 00:05:31.215 { 00:05:31.215 "dma_device_id": "system", 00:05:31.215 "dma_device_type": 1 00:05:31.215 }, 00:05:31.215 { 00:05:31.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.215 "dma_device_type": 2 00:05:31.215 } 00:05:31.215 ], 00:05:31.215 "driver_specific": { 00:05:31.215 "passthru": { 00:05:31.215 "name": "Passthru0", 00:05:31.215 "base_bdev_name": "Malloc2" 00:05:31.215 } 00:05:31.215 } 00:05:31.215 } 00:05:31.215 ]' 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.215 00:05:31.215 real 0m0.244s 00:05:31.215 user 0m0.152s 00:05:31.215 sys 0m0.037s 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.215 10:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.215 ************************************ 00:05:31.215 END TEST rpc_daemon_integrity 00:05:31.215 ************************************ 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.474 10:12:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.474 10:12:56 rpc -- rpc/rpc.sh@84 -- # killprocess 1696492 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@948 -- # '[' -z 1696492 ']' 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@952 -- # kill -0 1696492 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@953 -- # uname 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1696492 00:05:31.474 10:12:56 rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1696492' 00:05:31.474 killing process with pid 1696492 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@967 -- # kill 1696492 00:05:31.474 10:12:56 rpc -- common/autotest_common.sh@972 -- # wait 1696492 00:05:31.732 00:05:31.732 real 0m2.482s 00:05:31.732 user 0m3.116s 00:05:31.732 sys 0m0.780s 00:05:31.732 10:12:56 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.732 10:12:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.732 ************************************ 00:05:31.732 END TEST rpc 00:05:31.732 ************************************ 00:05:31.732 10:12:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.732 10:12:56 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.732 10:12:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.732 10:12:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.732 10:12:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.732 ************************************ 00:05:31.732 START TEST skip_rpc 00:05:31.732 ************************************ 00:05:31.732 10:12:56 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:31.990 * Looking for test storage... 00:05:31.990 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:31.990 10:12:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:31.990 10:12:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:31.990 10:12:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:31.990 10:12:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.990 10:12:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.990 10:12:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.990 ************************************ 00:05:31.990 START TEST skip_rpc 00:05:31.990 ************************************ 00:05:31.990 10:12:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:31.990 10:12:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:31.990 10:12:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1696964 00:05:31.990 10:12:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.990 10:12:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:31.990 [2024-07-15 10:12:56.636268] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
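The spdk_tgt started above runs with --no-rpc-server -m 0x1, so the skip_rpc case expects the spdk_get_version RPC issued further down to fail. A stand-alone sketch of that check, with paths and the 5-second wait mirroring this workspace (illustrative, not part of the harness):

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if $SPDK/scripts/rpc.py spdk_get_version; then
            echo "unexpected: RPC answered although --no-rpc-server was given"
    else
            echo "expected: no RPC listener on /var/tmp/spdk.sock"
    fi
    kill $tgt_pid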
00:05:31.990 [2024-07-15 10:12:56.636314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1696964 ] 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:31.990 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:31.990 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.990 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:31.991 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:31.991 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:31.991 [2024-07-15 10:12:56.725112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.250 [2024-07-15 10:12:56.795850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1696964 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1696964 ']' 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1696964 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 1696964 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1696964' 00:05:37.519 killing process with pid 1696964 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1696964 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1696964 00:05:37.519 00:05:37.519 real 0m5.374s 00:05:37.519 user 0m5.115s 00:05:37.519 sys 0m0.291s 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.519 10:13:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.519 ************************************ 00:05:37.519 END TEST skip_rpc 00:05:37.519 ************************************ 00:05:37.519 10:13:02 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:37.519 10:13:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:37.519 10:13:02 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.519 10:13:02 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.519 10:13:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.519 ************************************ 00:05:37.519 START TEST skip_rpc_with_json 00:05:37.519 ************************************ 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1698020 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1698020 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1698020 ']' 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.519 10:13:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.519 [2024-07-15 10:13:02.098316] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
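The skip_rpc_with_json case starting here drives a JSON config round-trip: the nvmf_get_transports probe further down fails while no transport exists, a TCP transport is then created over RPC, and save_config dumps the running configuration (which the test later reloads). A by-hand equivalent against the same socket, with the config path taken from CONFIG_PATH above:

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_get_transports --trtype tcp   # "No such device" until a transport exists
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp
    $SPDK/scripts/rpc.py save_config > $SPDK/test/rpc/config.json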
00:05:37.519 [2024-07-15 10:13:02.098362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698020 ] 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:37.519 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:37.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:37.519 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:37.519 [2024-07-15 10:13:02.188751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.519 [2024-07-15 10:13:02.261586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.086 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.086 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:38.086 10:13:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:38.086 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.086 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.086 [2024-07-15 10:13:02.873594] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:38.346 request: 00:05:38.346 { 00:05:38.346 "trtype": "tcp", 00:05:38.346 "method": "nvmf_get_transports", 00:05:38.346 "req_id": 1 00:05:38.346 } 00:05:38.346 Got JSON-RPC error response 00:05:38.346 response: 00:05:38.346 { 00:05:38.346 "code": -19, 00:05:38.346 "message": "No such device" 00:05:38.346 } 00:05:38.346 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:38.346 10:13:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:38.346 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.346 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.346 [2024-07-15 10:13:02.881703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.346 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.346 10:13:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:38.346 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.346 10:13:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:38.346 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.346 10:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:38.346 { 00:05:38.346 "subsystems": [ 00:05:38.346 { 00:05:38.346 "subsystem": "keyring", 00:05:38.346 "config": [] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "iobuf", 00:05:38.346 "config": [ 00:05:38.346 { 00:05:38.346 "method": "iobuf_set_options", 00:05:38.346 "params": { 00:05:38.346 "small_pool_count": 8192, 00:05:38.346 "large_pool_count": 1024, 00:05:38.346 "small_bufsize": 8192, 00:05:38.346 "large_bufsize": 135168 00:05:38.346 } 00:05:38.346 } 00:05:38.346 ] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "sock", 00:05:38.346 "config": [ 00:05:38.346 { 00:05:38.346 "method": "sock_set_default_impl", 00:05:38.346 "params": { 00:05:38.346 "impl_name": "posix" 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "sock_impl_set_options", 00:05:38.346 "params": { 00:05:38.346 "impl_name": "ssl", 00:05:38.346 "recv_buf_size": 4096, 00:05:38.346 "send_buf_size": 4096, 00:05:38.346 "enable_recv_pipe": true, 00:05:38.346 "enable_quickack": false, 00:05:38.346 "enable_placement_id": 0, 00:05:38.346 "enable_zerocopy_send_server": true, 00:05:38.346 "enable_zerocopy_send_client": false, 00:05:38.346 "zerocopy_threshold": 0, 00:05:38.346 "tls_version": 0, 00:05:38.346 "enable_ktls": false 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "sock_impl_set_options", 00:05:38.346 "params": { 00:05:38.346 "impl_name": "posix", 00:05:38.346 "recv_buf_size": 2097152, 00:05:38.346 "send_buf_size": 2097152, 00:05:38.346 "enable_recv_pipe": true, 00:05:38.346 "enable_quickack": false, 00:05:38.346 "enable_placement_id": 0, 00:05:38.346 "enable_zerocopy_send_server": true, 00:05:38.346 "enable_zerocopy_send_client": false, 00:05:38.346 "zerocopy_threshold": 0, 00:05:38.346 "tls_version": 0, 00:05:38.346 "enable_ktls": false 00:05:38.346 } 00:05:38.346 } 00:05:38.346 ] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "vmd", 00:05:38.346 "config": [] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "accel", 00:05:38.346 "config": [ 00:05:38.346 { 00:05:38.346 "method": "accel_set_options", 00:05:38.346 "params": { 00:05:38.346 "small_cache_size": 128, 00:05:38.346 "large_cache_size": 16, 00:05:38.346 "task_count": 2048, 00:05:38.346 "sequence_count": 2048, 00:05:38.346 "buf_count": 2048 00:05:38.346 } 00:05:38.346 } 00:05:38.346 ] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "bdev", 00:05:38.346 "config": [ 00:05:38.346 { 00:05:38.346 "method": "bdev_set_options", 00:05:38.346 "params": { 00:05:38.346 "bdev_io_pool_size": 65535, 00:05:38.346 "bdev_io_cache_size": 256, 00:05:38.346 "bdev_auto_examine": true, 00:05:38.346 "iobuf_small_cache_size": 128, 00:05:38.346 "iobuf_large_cache_size": 16 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "bdev_raid_set_options", 00:05:38.346 "params": { 00:05:38.346 "process_window_size_kb": 1024 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "bdev_iscsi_set_options", 00:05:38.346 "params": { 00:05:38.346 "timeout_sec": 30 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "bdev_nvme_set_options", 00:05:38.346 "params": { 00:05:38.346 "action_on_timeout": "none", 00:05:38.346 "timeout_us": 0, 00:05:38.346 "timeout_admin_us": 0, 00:05:38.346 "keep_alive_timeout_ms": 10000, 00:05:38.346 "arbitration_burst": 0, 00:05:38.346 "low_priority_weight": 0, 00:05:38.346 "medium_priority_weight": 0, 00:05:38.346 "high_priority_weight": 0, 00:05:38.346 
"nvme_adminq_poll_period_us": 10000, 00:05:38.346 "nvme_ioq_poll_period_us": 0, 00:05:38.346 "io_queue_requests": 0, 00:05:38.346 "delay_cmd_submit": true, 00:05:38.346 "transport_retry_count": 4, 00:05:38.346 "bdev_retry_count": 3, 00:05:38.346 "transport_ack_timeout": 0, 00:05:38.346 "ctrlr_loss_timeout_sec": 0, 00:05:38.346 "reconnect_delay_sec": 0, 00:05:38.346 "fast_io_fail_timeout_sec": 0, 00:05:38.346 "disable_auto_failback": false, 00:05:38.346 "generate_uuids": false, 00:05:38.346 "transport_tos": 0, 00:05:38.346 "nvme_error_stat": false, 00:05:38.346 "rdma_srq_size": 0, 00:05:38.346 "io_path_stat": false, 00:05:38.346 "allow_accel_sequence": false, 00:05:38.346 "rdma_max_cq_size": 0, 00:05:38.346 "rdma_cm_event_timeout_ms": 0, 00:05:38.346 "dhchap_digests": [ 00:05:38.346 "sha256", 00:05:38.346 "sha384", 00:05:38.346 "sha512" 00:05:38.346 ], 00:05:38.346 "dhchap_dhgroups": [ 00:05:38.346 "null", 00:05:38.346 "ffdhe2048", 00:05:38.346 "ffdhe3072", 00:05:38.346 "ffdhe4096", 00:05:38.346 "ffdhe6144", 00:05:38.346 "ffdhe8192" 00:05:38.346 ] 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "bdev_nvme_set_hotplug", 00:05:38.346 "params": { 00:05:38.346 "period_us": 100000, 00:05:38.346 "enable": false 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "bdev_wait_for_examine" 00:05:38.346 } 00:05:38.346 ] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "scsi", 00:05:38.346 "config": null 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "scheduler", 00:05:38.346 "config": [ 00:05:38.346 { 00:05:38.346 "method": "framework_set_scheduler", 00:05:38.346 "params": { 00:05:38.346 "name": "static" 00:05:38.346 } 00:05:38.346 } 00:05:38.346 ] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "vhost_scsi", 00:05:38.346 "config": [] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "vhost_blk", 00:05:38.346 "config": [] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "ublk", 00:05:38.346 "config": [] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "nbd", 00:05:38.346 "config": [] 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "subsystem": "nvmf", 00:05:38.346 "config": [ 00:05:38.346 { 00:05:38.346 "method": "nvmf_set_config", 00:05:38.346 "params": { 00:05:38.346 "discovery_filter": "match_any", 00:05:38.346 "admin_cmd_passthru": { 00:05:38.346 "identify_ctrlr": false 00:05:38.346 } 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "nvmf_set_max_subsystems", 00:05:38.346 "params": { 00:05:38.346 "max_subsystems": 1024 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "nvmf_set_crdt", 00:05:38.346 "params": { 00:05:38.346 "crdt1": 0, 00:05:38.346 "crdt2": 0, 00:05:38.346 "crdt3": 0 00:05:38.346 } 00:05:38.346 }, 00:05:38.346 { 00:05:38.346 "method": "nvmf_create_transport", 00:05:38.346 "params": { 00:05:38.346 "trtype": "TCP", 00:05:38.346 "max_queue_depth": 128, 00:05:38.346 "max_io_qpairs_per_ctrlr": 127, 00:05:38.346 "in_capsule_data_size": 4096, 00:05:38.346 "max_io_size": 131072, 00:05:38.346 "io_unit_size": 131072, 00:05:38.346 "max_aq_depth": 128, 00:05:38.346 "num_shared_buffers": 511, 00:05:38.346 "buf_cache_size": 4294967295, 00:05:38.346 "dif_insert_or_strip": false, 00:05:38.346 "zcopy": false, 00:05:38.346 "c2h_success": true, 00:05:38.346 "sock_priority": 0, 00:05:38.346 "abort_timeout_sec": 1, 00:05:38.346 "ack_timeout": 0, 00:05:38.346 "data_wr_pool_size": 0 00:05:38.346 } 00:05:38.346 } 00:05:38.346 ] 00:05:38.346 }, 00:05:38.346 { 00:05:38.347 "subsystem": 
"iscsi", 00:05:38.347 "config": [ 00:05:38.347 { 00:05:38.347 "method": "iscsi_set_options", 00:05:38.347 "params": { 00:05:38.347 "node_base": "iqn.2016-06.io.spdk", 00:05:38.347 "max_sessions": 128, 00:05:38.347 "max_connections_per_session": 2, 00:05:38.347 "max_queue_depth": 64, 00:05:38.347 "default_time2wait": 2, 00:05:38.347 "default_time2retain": 20, 00:05:38.347 "first_burst_length": 8192, 00:05:38.347 "immediate_data": true, 00:05:38.347 "allow_duplicated_isid": false, 00:05:38.347 "error_recovery_level": 0, 00:05:38.347 "nop_timeout": 60, 00:05:38.347 "nop_in_interval": 30, 00:05:38.347 "disable_chap": false, 00:05:38.347 "require_chap": false, 00:05:38.347 "mutual_chap": false, 00:05:38.347 "chap_group": 0, 00:05:38.347 "max_large_datain_per_connection": 64, 00:05:38.347 "max_r2t_per_connection": 4, 00:05:38.347 "pdu_pool_size": 36864, 00:05:38.347 "immediate_data_pool_size": 16384, 00:05:38.347 "data_out_pool_size": 2048 00:05:38.347 } 00:05:38.347 } 00:05:38.347 ] 00:05:38.347 } 00:05:38.347 ] 00:05:38.347 } 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1698020 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1698020 ']' 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1698020 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1698020 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1698020' 00:05:38.347 killing process with pid 1698020 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1698020 00:05:38.347 10:13:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1698020 00:05:38.914 10:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1698294 00:05:38.914 10:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:38.914 10:13:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1698294 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1698294 ']' 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1698294 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1698294 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1698294' 00:05:44.186 killing process with pid 1698294 00:05:44.186 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1698294 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1698294 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:05:44.187 00:05:44.187 real 0m6.721s 00:05:44.187 user 0m6.417s 00:05:44.187 sys 0m0.686s 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.187 ************************************ 00:05:44.187 END TEST skip_rpc_with_json 00:05:44.187 ************************************ 00:05:44.187 10:13:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:44.187 10:13:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:44.187 10:13:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.187 10:13:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.187 10:13:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.187 ************************************ 00:05:44.187 START TEST skip_rpc_with_delay 00:05:44.187 ************************************ 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:44.187 [2024-07-15 10:13:08.911468] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:44.187 [2024-07-15 10:13:08.911540] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.187 00:05:44.187 real 0m0.080s 00:05:44.187 user 0m0.044s 00:05:44.187 sys 0m0.036s 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.187 10:13:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:44.187 ************************************ 00:05:44.187 END TEST skip_rpc_with_delay 00:05:44.187 ************************************ 00:05:44.187 10:13:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:44.187 10:13:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:44.187 10:13:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:44.187 10:13:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:44.187 10:13:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.187 10:13:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.187 10:13:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.446 ************************************ 00:05:44.446 START TEST exit_on_failed_rpc_init 00:05:44.446 ************************************ 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1699160 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1699160 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1699160 ']' 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
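[Editor's note] The skip_rpc_with_json run that finished above exercises a simple round trip: create the TCP transport over JSON-RPC, capture the live configuration with save_config, then boot a fresh spdk_tgt directly from that file with --no-rpc-server and confirm (via the grep for 'TCP Transport Init' above) that the transport came back up. A minimal sketch of the same flow outside the test harness, assuming the default /var/tmp/spdk.sock RPC socket and an illustrative output path:

    # Create the transport that the first nvmf_get_transports call reported as missing
    ./scripts/rpc.py nvmf_create_transport -t tcp
    # Capture the running configuration (iobuf, sock, accel, bdev, nvmf, iscsi, ...) as JSON
    ./scripts/rpc.py save_config > /tmp/config.json
    # Relaunch the target straight from the saved file, with no RPC server at all
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json

The skip_rpc_with_delay case directly above is the negative counterpart: with --no-rpc-server in effect, spdk_tgt rejects --wait-for-rpc at startup, and the test asserts only that non-zero exit.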
00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.446 10:13:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.446 [2024-07-15 10:13:09.062211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:44.446 [2024-07-15 10:13:09.062259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699160 ] 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 
EAL: Requested device 0000:3f:01.3 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:44.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:44.446 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:44.446 [2024-07-15 10:13:09.154340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.446 [2024-07-15 10:13:09.227065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:45.382 10:13:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:45.382 [2024-07-15 10:13:09.892634] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:45.382 [2024-07-15 10:13:09.892684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699409 ] 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:02.6 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:45.382 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:45.382 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:45.382 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:45.382 [2024-07-15 10:13:09.981689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.382 [2024-07-15 10:13:10.061412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.382 [2024-07-15 10:13:10.061479] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
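[Editor's note] The listen failure here, together with the spdk_rpc_initialize and spdk_app_stop errors that follow it, is exactly what exit_on_failed_rpc_init probes: the second spdk_tgt (core mask 0x2) is started without overriding -r, finds /var/tmp/spdk.sock already owned by the first instance, and stops with a non-zero status. Two targets can coexist when each is given its own RPC socket, as the json_config test later in this log does with -r /var/tmp/spdk_tgt.sock; a sketch with illustrative socket names:

    # Give each instance its own RPC socket so the RPC listen step succeeds in both
    # (hugepage file prefixes are already per-PID here, per the EAL parameter lines)
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # Address a particular instance by passing the matching socket to rpc.py
    ./scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods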
00:05:45.382 [2024-07-15 10:13:10.061490] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:45.382 [2024-07-15 10:13:10.061498] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1699160 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1699160 ']' 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1699160 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.382 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1699160 00:05:45.686 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.686 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.686 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1699160' 00:05:45.686 killing process with pid 1699160 00:05:45.686 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1699160 00:05:45.686 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1699160 00:05:45.945 00:05:45.945 real 0m1.497s 00:05:45.945 user 0m1.654s 00:05:45.945 sys 0m0.483s 00:05:45.945 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.945 10:13:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.945 ************************************ 00:05:45.945 END TEST exit_on_failed_rpc_init 00:05:45.945 ************************************ 00:05:45.945 10:13:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.945 10:13:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:05:45.945 00:05:45.945 real 0m14.090s 00:05:45.945 user 0m13.393s 00:05:45.945 sys 0m1.786s 00:05:45.945 10:13:10 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.945 10:13:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.945 ************************************ 00:05:45.945 END TEST skip_rpc 00:05:45.945 ************************************ 00:05:45.945 10:13:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.945 10:13:10 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.945 10:13:10 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.945 10:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.945 10:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:45.945 ************************************ 00:05:45.945 START TEST rpc_client 00:05:45.945 ************************************ 00:05:45.945 10:13:10 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.945 * Looking for test storage... 00:05:46.204 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:05:46.204 10:13:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:46.204 OK 00:05:46.204 10:13:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:46.204 00:05:46.204 real 0m0.143s 00:05:46.204 user 0m0.062s 00:05:46.204 sys 0m0.093s 00:05:46.204 10:13:10 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.204 10:13:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:46.204 ************************************ 00:05:46.204 END TEST rpc_client 00:05:46.204 ************************************ 00:05:46.204 10:13:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.204 10:13:10 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.204 10:13:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.204 10:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.204 10:13:10 -- common/autotest_common.sh@10 -- # set +x 00:05:46.204 ************************************ 00:05:46.204 START TEST json_config 00:05:46.204 ************************************ 00:05:46.204 10:13:10 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.204 10:13:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.204 10:13:10 json_config -- 
nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:46.204 10:13:10 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.204 10:13:10 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.204 10:13:10 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.204 10:13:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.204 10:13:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.204 10:13:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.204 10:13:10 json_config -- paths/export.sh@5 -- # export PATH 00:05:46.204 10:13:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@47 -- # : 0 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.204 10:13:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.205 10:13:10 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:46.205 10:13:10 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:46.205 10:13:10 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:46.205 
10:13:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:46.205 INFO: JSON configuration test init 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.205 10:13:10 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:46.205 10:13:10 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.205 10:13:10 json_config -- json_config/common.sh@10 -- # shift 00:05:46.205 10:13:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.205 10:13:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.205 10:13:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.205 10:13:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.205 10:13:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.205 10:13:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1699775 00:05:46.205 10:13:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.205 Waiting for target to run... 
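[Editor's note] json_config_test_start_app launches this target with --wait-for-rpc and its own socket (/var/tmp/spdk_tgt.sock), so subsystem initialization is held back until the test has routed crypto operations to dpdk_cryptodev, which is what the accel_assign_opc calls just below do. A sketch of that startup ordering, using the socket path from this log; framework_start_init is not shown in this excerpt and is assumed here as the usual call that lets a --wait-for-rpc target finish initializing:

    # Only startup-time RPCs are accepted while the target waits for RPC
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev
    # Complete initialization once the early accel configuration is in place
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init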
00:05:46.205 10:13:10 json_config -- json_config/common.sh@25 -- # waitforlisten 1699775 /var/tmp/spdk_tgt.sock 00:05:46.205 10:13:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@829 -- # '[' -z 1699775 ']' 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.205 10:13:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.464 [2024-07-15 10:13:11.040416] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:46.464 [2024-07-15 10:13:11.040467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699775 ] 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:01.0 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:01.1 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:01.2 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:01.3 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:01.4 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:01.5 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:01.6 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:01.7 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:02.0 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:02.1 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:02.2 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:02.3 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:02.4 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:02.5 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 
0000:3d:02.6 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3d:02.7 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:01.0 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:01.1 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:01.2 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:01.3 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:01.4 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:01.5 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:01.6 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:01.7 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:02.0 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:02.1 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:02.2 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:02.3 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:02.4 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:02.5 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:02.6 cannot be used 00:05:46.723 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:05:46.723 EAL: Requested device 0000:3f:02.7 cannot be used 00:05:46.723 [2024-07-15 10:13:11.510138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.982 [2024-07-15 10:13:11.600359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.256 10:13:11 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.256 10:13:11 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:47.256 10:13:11 json_config -- json_config/common.sh@26 -- # echo '' 00:05:47.256 00:05:47.256 10:13:11 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:47.256 10:13:11 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:47.256 10:13:11 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.256 10:13:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.256 10:13:11 json_config -- json_config/json_config.sh@95 -- # [[ 1 -eq 1 ]] 00:05:47.256 10:13:11 json_config -- json_config/json_config.sh@96 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:05:47.256 10:13:11 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:05:47.256 10:13:11 json_config -- json_config/json_config.sh@97 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:47.256 10:13:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:47.527 [2024-07-15 10:13:12.150037] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:47.527 10:13:12 json_config -- json_config/json_config.sh@98 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:47.527 10:13:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:47.785 [2024-07-15 10:13:12.330496] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:05:47.785 10:13:12 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:47.785 10:13:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.785 10:13:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.785 10:13:12 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:47.785 10:13:12 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:47.785 10:13:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:47.785 [2024-07-15 10:13:12.570085] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:53.054 10:13:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:53.054 10:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:53.054 10:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:53.054 10:13:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.054 10:13:17 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:05:53.054 10:13:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:53.054 10:13:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:05:53.054 10:13:17 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:53.054 10:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:53.312 10:13:17 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:05:53.312 10:13:17 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:05:53.312 10:13:17 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:05:53.313 10:13:17 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:05:53.313 10:13:17 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:05:53.313 10:13:17 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:05:53.313 10:13:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:05:53.571 Nvme0n1p0 Nvme0n1p1 00:05:53.571 10:13:18 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:05:53.571 10:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:05:53.571 [2024-07-15 10:13:18.306438] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:53.571 [2024-07-15 10:13:18.306481] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:53.571 00:05:53.571 10:13:18 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:05:53.571 10:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:05:53.829 Malloc3 00:05:53.829 10:13:18 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:53.829 10:13:18 
json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:54.087 [2024-07-15 10:13:18.643346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:54.087 [2024-07-15 10:13:18.643384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.087 [2024-07-15 10:13:18.643417] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26d3d70 00:05:54.087 [2024-07-15 10:13:18.643425] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.087 [2024-07-15 10:13:18.644535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.087 [2024-07-15 10:13:18.644559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:54.087 PTBdevFromMalloc3 00:05:54.087 10:13:18 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:05:54.087 10:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:05:54.087 Null0 00:05:54.087 10:13:18 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:05:54.087 10:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:05:54.345 Malloc0 00:05:54.345 10:13:18 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:05:54.346 10:13:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:05:54.604 Malloc1 00:05:54.604 10:13:19 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:05:54.604 10:13:19 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:05:54.604 102400+0 records in 00:05:54.604 102400+0 records out 00:05:54.604 104857600 bytes (105 MB, 100 MiB) copied, 0.192954 s, 543 MB/s 00:05:54.604 10:13:19 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:05:54.604 10:13:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:05:54.863 aio_disk 00:05:54.863 10:13:19 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:05:54.863 10:13:19 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:54.863 10:13:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:59.048 a0398c5e-b7cb-427e-8b66-79817fcff6b2 00:05:59.048 10:13:23 json_config -- json_config/json_config.sh@154 -- # 
expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:05:59.048 10:13:23 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:05:59.048 10:13:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:05:59.048 10:13:23 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:05:59.048 10:13:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:05:59.305 10:13:23 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:05:59.305 10:13:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:05:59.563 10:13:24 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:05:59.563 10:13:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:05:59.563 10:13:24 json_config -- json_config/json_config.sh@157 -- # [[ 1 -eq 1 ]] 00:05:59.563 10:13:24 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:05:59.563 10:13:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:05:59.821 MallocForCryptoBdev 00:05:59.821 10:13:24 json_config -- json_config/json_config.sh@159 -- # lspci -d:37c8 00:05:59.821 10:13:24 json_config -- json_config/json_config.sh@159 -- # wc -l 00:05:59.821 10:13:24 json_config -- json_config/json_config.sh@159 -- # [[ 5 -eq 0 ]] 00:05:59.821 10:13:24 json_config -- json_config/json_config.sh@162 -- # local crypto_driver=crypto_qat 00:05:59.821 10:13:24 json_config -- json_config/json_config.sh@165 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:05:59.821 10:13:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:06:00.079 [2024-07-15 10:13:24.639918] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:06:00.079 CryptoMallocBdev 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@169 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 
bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:3d9ae5af-c05d-44c9-9376-20c6581f2cc8 bdev_register:29e36ef1-c6b4-4fda-bf2d-7fab393358eb bdev_register:789ad810-da85-457a-9fbe-91067506636f bdev_register:84e35b9d-4085-49db-a7d5-79311a207773 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:3d9ae5af-c05d-44c9-9376-20c6581f2cc8 bdev_register:29e36ef1-c6b4-4fda-bf2d-7fab393358eb bdev_register:789ad810-da85-457a-9fbe-91067506636f bdev_register:84e35b9d-4085-49db-a7d5-79311a207773 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@71 -- # sort 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@72 -- # sort 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:00.079 10:13:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.079 10:13:24 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:06:00.079 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:3d9ae5af-c05d-44c9-9376-20c6581f2cc8 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:29e36ef1-c6b4-4fda-bf2d-7fab393358eb 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:789ad810-da85-457a-9fbe-91067506636f 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:84e35b9d-4085-49db-a7d5-79311a207773 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo 
bdev_register:MallocForCryptoBdev 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:CryptoMallocBdev 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:29e36ef1-c6b4-4fda-bf2d-7fab393358eb bdev_register:3d9ae5af-c05d-44c9-9376-20c6581f2cc8 bdev_register:789ad810-da85-457a-9fbe-91067506636f bdev_register:84e35b9d-4085-49db-a7d5-79311a207773 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\9\e\3\6\e\f\1\-\c\6\b\4\-\4\f\d\a\-\b\f\2\d\-\7\f\a\b\3\9\3\3\5\8\e\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\d\9\a\e\5\a\f\-\c\0\5\d\-\4\4\c\9\-\9\3\7\6\-\2\0\c\6\5\8\1\f\2\c\c\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\8\9\a\d\8\1\0\-\d\a\8\5\-\4\5\7\a\-\9\f\b\e\-\9\1\0\6\7\5\0\6\6\3\6\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\4\e\3\5\b\9\d\-\4\0\8\5\-\4\9\d\b\-\a\7\d\5\-\7\9\3\1\1\a\2\0\7\7\7\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@86 -- # cat 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:29e36ef1-c6b4-4fda-bf2d-7fab393358eb bdev_register:3d9ae5af-c05d-44c9-9376-20c6581f2cc8 bdev_register:789ad810-da85-457a-9fbe-91067506636f bdev_register:84e35b9d-4085-49db-a7d5-79311a207773 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:06:00.080 Expected events matched: 00:06:00.080 bdev_register:29e36ef1-c6b4-4fda-bf2d-7fab393358eb 00:06:00.080 bdev_register:3d9ae5af-c05d-44c9-9376-20c6581f2cc8 00:06:00.080 bdev_register:789ad810-da85-457a-9fbe-91067506636f 00:06:00.080 bdev_register:84e35b9d-4085-49db-a7d5-79311a207773 00:06:00.080 bdev_register:aio_disk 00:06:00.080 bdev_register:CryptoMallocBdev 00:06:00.080 bdev_register:Malloc0 00:06:00.080 bdev_register:Malloc0p0 00:06:00.080 bdev_register:Malloc0p1 00:06:00.080 bdev_register:Malloc0p2 00:06:00.080 bdev_register:Malloc1 00:06:00.080 bdev_register:Malloc3 
00:06:00.080 bdev_register:MallocForCryptoBdev 00:06:00.080 bdev_register:Null0 00:06:00.080 bdev_register:Nvme0n1 00:06:00.080 bdev_register:Nvme0n1p0 00:06:00.080 bdev_register:Nvme0n1p1 00:06:00.080 bdev_register:PTBdevFromMalloc3 00:06:00.080 10:13:24 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:06:00.080 10:13:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.080 10:13:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.338 10:13:24 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:00.338 10:13:24 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:00.338 10:13:24 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:00.338 10:13:24 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:00.338 10:13:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.338 10:13:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.338 10:13:24 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:00.338 10:13:24 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.338 10:13:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:00.338 MallocBdevForConfigChangeCheck 00:06:00.338 10:13:25 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:00.338 10:13:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.338 10:13:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.596 10:13:25 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:00.596 10:13:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.854 10:13:25 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:00.854 INFO: shutting down applications... 
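
In outline, the bdev subsystem configuration built above reduces to the following RPC sequence (every command and name appears in the trace; the RPC shorthand variable is introduced here only for readability, and the comments are a condensed reading of the log, not part of json_config.sh):

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_split_create Nvme0n1 2                 # yields Nvme0n1p0 and Nvme0n1p1
    $RPC bdev_split_create Malloc0 3                 # deferred until Malloc0 exists (see the "unable to find bdev" notices)
    $RPC bdev_malloc_create 8 4096 --name Malloc3
    $RPC bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
    $RPC bdev_null_create Null0 32 512
    $RPC bdev_malloc_create 32 512 --name Malloc0
    $RPC bdev_malloc_create 16 4096 --name Malloc1
    $RPC bdev_aio_create /sample_aio aio_disk 1024   # /sample_aio was filled with dd just before
    $RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
    $RPC bdev_lvol_create -l lvs_test lvol0 32
    $RPC bdev_lvol_create -l lvs_test -t lvol1 32
    $RPC bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    $RPC bdev_lvol_clone lvs_test/snapshot0 clone0
    $RPC bdev_malloc_create 8 1024 --name MallocForCryptoBdev
    $RPC bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

Each registration is then cross-checked against the notify_get_notifications output listed above.
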
00:06:00.854 10:13:25 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:00.854 10:13:25 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:00.854 10:13:25 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:00.854 10:13:25 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:00.854 [2024-07-15 10:13:25.614710] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:03.386 Calling clear_iscsi_subsystem 00:06:03.386 Calling clear_nvmf_subsystem 00:06:03.387 Calling clear_nbd_subsystem 00:06:03.387 Calling clear_ublk_subsystem 00:06:03.387 Calling clear_vhost_blk_subsystem 00:06:03.387 Calling clear_vhost_scsi_subsystem 00:06:03.387 Calling clear_bdev_subsystem 00:06:03.387 10:13:28 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:06:03.387 10:13:28 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:03.387 10:13:28 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:03.387 10:13:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:03.387 10:13:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.387 10:13:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:03.645 10:13:28 json_config -- json_config/json_config.sh@345 -- # break 00:06:03.645 10:13:28 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:03.645 10:13:28 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:03.645 10:13:28 json_config -- json_config/common.sh@31 -- # local app=target 00:06:03.645 10:13:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.645 10:13:28 json_config -- json_config/common.sh@35 -- # [[ -n 1699775 ]] 00:06:03.645 10:13:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1699775 00:06:03.645 10:13:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.645 10:13:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.645 10:13:28 json_config -- json_config/common.sh@41 -- # kill -0 1699775 00:06:03.645 10:13:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.213 10:13:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.213 10:13:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.213 10:13:28 json_config -- json_config/common.sh@41 -- # kill -0 1699775 00:06:04.213 10:13:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.213 10:13:28 json_config -- json_config/common.sh@43 -- # break 00:06:04.213 10:13:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.213 10:13:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.213 SPDK target shutdown done 00:06:04.213 10:13:28 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:04.213 INFO: relaunching applications... 
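
The target shutdown logged above is a plain signal-and-poll loop from json_config/common.sh; roughly (a sketch reconstructed from the xtrace, with error handling simplified):

    # json_config_test_shutdown_app: SIGINT the target, then poll for up to
    # 30 iterations of 0.5 s until the PID disappears.
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break    # kill -0 only checks that the process still exists
        sleep 0.5
    done
    echo 'SPDK target shutdown done'

Before the SIGINT, clear_config.py is invoked and the loop above it re-saves the configuration, strips global parameters with config_filter.py, and checks that what remains is empty.
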
00:06:04.213 10:13:28 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.213 10:13:28 json_config -- json_config/common.sh@9 -- # local app=target 00:06:04.213 10:13:28 json_config -- json_config/common.sh@10 -- # shift 00:06:04.213 10:13:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.213 10:13:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.213 10:13:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.213 10:13:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.213 10:13:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.213 10:13:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1702859 00:06:04.213 10:13:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.213 Waiting for target to run... 00:06:04.213 10:13:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.213 10:13:28 json_config -- json_config/common.sh@25 -- # waitforlisten 1702859 /var/tmp/spdk_tgt.sock 00:06:04.213 10:13:28 json_config -- common/autotest_common.sh@829 -- # '[' -z 1702859 ']' 00:06:04.213 10:13:28 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.213 10:13:28 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.213 10:13:28 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.213 10:13:28 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.213 10:13:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.213 [2024-07-15 10:13:28.952857] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:04.213 [2024-07-15 10:13:28.952919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702859 ] 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:04.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:04.780 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:04.780 [2024-07-15 10:13:29.423728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.780 [2024-07-15 10:13:29.506026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.780 [2024-07-15 10:13:29.559473] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:06:04.780 [2024-07-15 10:13:29.567507] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:06:05.037 [2024-07-15 10:13:29.575523] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:06:05.037 [2024-07-15 10:13:29.654897] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:06:07.600 [2024-07-15 10:13:31.769035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:07.600 [2024-07-15 10:13:31.769083] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:07.600 [2024-07-15 10:13:31.769093] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:07.600 [2024-07-15 10:13:31.777053] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:07.600 [2024-07-15 10:13:31.777073] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:07.600 [2024-07-15 10:13:31.785066] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:07.600 [2024-07-15 10:13:31.785083] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:07.600 [2024-07-15 10:13:31.793101] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:06:07.600 [2024-07-15 10:13:31.793124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:06:07.600 [2024-07-15 10:13:31.793133] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:10.125 [2024-07-15 10:13:34.674966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:10.125 [2024-07-15 10:13:34.674999] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.125 [2024-07-15 10:13:34.675011] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df9640 00:06:10.125 [2024-07-15 10:13:34.675018] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.125 [2024-07-15 10:13:34.675210] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.125 [2024-07-15 10:13:34.675221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:10.125 10:13:34 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.125 10:13:34 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:10.125 10:13:34 json_config -- json_config/common.sh@26 -- # echo '' 00:06:10.125 00:06:10.125 10:13:34 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:10.125 10:13:34 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:10.125 INFO: Checking if target configuration is the same... 00:06:10.125 10:13:34 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.125 10:13:34 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:10.125 10:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.125 + '[' 2 -ne 2 ']' 00:06:10.125 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:10.125 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:10.125 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:10.125 +++ basename /dev/fd/62 00:06:10.125 ++ mktemp /tmp/62.XXX 00:06:10.125 + tmp_file_1=/tmp/62.zMi 00:06:10.125 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.125 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.125 + tmp_file_2=/tmp/spdk_tgt_config.json.mLq 00:06:10.125 + ret=0 00:06:10.125 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.381 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.381 + diff -u /tmp/62.zMi /tmp/spdk_tgt_config.json.mLq 00:06:10.381 + echo 'INFO: JSON config files are the same' 00:06:10.381 INFO: JSON config files are the same 00:06:10.381 + rm /tmp/62.zMi /tmp/spdk_tgt_config.json.mLq 00:06:10.381 + exit 0 00:06:10.381 10:13:35 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:10.381 10:13:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:10.381 INFO: changing configuration and checking if this can be detected... 
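
The "JSON config files are the same" verdict above comes from json_diff.sh, which normalizes both inputs before comparing them; in outline (the redirections are not visible in the xtrace, so the plumbing below is an assumption):

    # Compare the live configuration against the previously saved spdk_tgt_config.json.
    rpc.py -s /var/tmp/spdk_tgt.sock save_config | config_filter.py -method sort > "$tmp_file_1"
    config_filter.py -method sort < spdk_tgt_config.json > "$tmp_file_2"
    diff -u "$tmp_file_1" "$tmp_file_2" \
        && echo 'INFO: JSON config files are the same'

Sorting with config_filter.py keeps the comparison insensitive to the order in which subsystems and bdevs are listed.
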
00:06:10.381 10:13:35 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.381 10:13:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.637 10:13:35 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.637 10:13:35 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:10.637 10:13:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.637 + '[' 2 -ne 2 ']' 00:06:10.637 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:10.637 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:06:10.637 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:06:10.637 +++ basename /dev/fd/62 00:06:10.637 ++ mktemp /tmp/62.XXX 00:06:10.637 + tmp_file_1=/tmp/62.y7e 00:06:10.637 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.637 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.637 + tmp_file_2=/tmp/spdk_tgt_config.json.nLT 00:06:10.637 + ret=0 00:06:10.637 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.903 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.903 + diff -u /tmp/62.y7e /tmp/spdk_tgt_config.json.nLT 00:06:10.903 + ret=1 00:06:10.903 + echo '=== Start of file: /tmp/62.y7e ===' 00:06:10.903 + cat /tmp/62.y7e 00:06:10.903 + echo '=== End of file: /tmp/62.y7e ===' 00:06:10.903 + echo '' 00:06:10.903 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nLT ===' 00:06:10.903 + cat /tmp/spdk_tgt_config.json.nLT 00:06:10.903 + echo '=== End of file: /tmp/spdk_tgt_config.json.nLT ===' 00:06:10.903 + echo '' 00:06:10.903 + rm /tmp/62.y7e /tmp/spdk_tgt_config.json.nLT 00:06:10.903 + exit 1 00:06:10.903 10:13:35 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:10.903 INFO: configuration change detected. 
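
MallocBdevForConfigChangeCheck exists purely as a sentinel: deleting it is a harmless, guaranteed difference between the running target and spdk_tgt_config.json, so the same diff is now required to fail. A sketch of the expectation (not the literal script):

    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    json_diff.sh <(rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json \
        && exit 1    # an empty diff here would mean the change went undetected
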
00:06:10.903 10:13:35 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:10.903 10:13:35 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:10.903 10:13:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.903 10:13:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.160 10:13:35 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:11.160 10:13:35 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:11.160 10:13:35 json_config -- json_config/json_config.sh@317 -- # [[ -n 1702859 ]] 00:06:11.160 10:13:35 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:11.160 10:13:35 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:11.160 10:13:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.160 10:13:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.160 10:13:35 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:06:11.160 10:13:35 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:11.160 10:13:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:11.160 10:13:35 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:11.160 10:13:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:11.417 10:13:36 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:11.417 10:13:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:06:11.674 10:13:36 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:11.674 10:13:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:11.674 10:13:36 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:11.674 10:13:36 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:11.674 10:13:36 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:11.674 10:13:36 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:11.674 10:13:36 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:11.674 10:13:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.674 10:13:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.674 10:13:36 json_config -- json_config/json_config.sh@323 -- # killprocess 1702859 00:06:11.674 10:13:36 json_config -- common/autotest_common.sh@948 -- # '[' -z 1702859 ']' 00:06:11.674 10:13:36 json_config -- common/autotest_common.sh@952 -- # kill -0 1702859 00:06:11.674 10:13:36 json_config -- common/autotest_common.sh@953 -- # uname 00:06:11.674 10:13:36 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.674 10:13:36 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1702859 00:06:11.932 10:13:36 json_config -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.932 10:13:36 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.932 10:13:36 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1702859' 00:06:11.932 killing process with pid 1702859 00:06:11.932 10:13:36 json_config -- common/autotest_common.sh@967 -- # kill 1702859 00:06:11.932 10:13:36 json_config -- common/autotest_common.sh@972 -- # wait 1702859 00:06:14.460 10:13:38 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.460 10:13:38 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:14.460 10:13:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.460 10:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.460 10:13:38 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:14.460 10:13:38 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:14.460 INFO: Success 00:06:14.460 00:06:14.460 real 0m28.135s 00:06:14.460 user 0m30.795s 00:06:14.460 sys 0m3.395s 00:06:14.460 10:13:38 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.460 10:13:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.460 ************************************ 00:06:14.460 END TEST json_config 00:06:14.460 ************************************ 00:06:14.460 10:13:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.460 10:13:39 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.460 10:13:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.460 10:13:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.460 10:13:39 -- common/autotest_common.sh@10 -- # set +x 00:06:14.460 ************************************ 00:06:14.460 START TEST json_config_extra_key 00:06:14.460 ************************************ 00:06:14.460 10:13:39 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.460 10:13:39 
json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:06:14.460 10:13:39 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.460 10:13:39 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.460 10:13:39 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.460 10:13:39 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.460 10:13:39 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.460 10:13:39 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.460 10:13:39 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:14.460 10:13:39 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.460 10:13:39 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:14.460 INFO: launching applications... 00:06:14.460 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:14.460 10:13:39 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:14.460 10:13:39 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1704821 00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:14.461 Waiting for target to run... 
00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1704821 /var/tmp/spdk_tgt.sock 00:06:14.461 10:13:39 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1704821 ']' 00:06:14.461 10:13:39 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:14.461 10:13:39 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.461 10:13:39 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:14.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:14.461 10:13:39 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.461 10:13:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:14.461 10:13:39 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:06:14.461 [2024-07-15 10:13:39.198863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:14.461 [2024-07-15 10:13:39.198939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1704821 ] 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested 
device 0000:3d:02.5 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.719 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:14.719 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:14.720 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:14.720 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:14.977 [2024-07-15 10:13:39.514715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.977 [2024-07-15 10:13:39.579182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.234 10:13:39 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.234 10:13:39 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:15.234 00:06:15.234 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:15.234 INFO: shutting down applications... 
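The shutdown sequence that follows sends SIGINT to the target and then polls the PID until it disappears. Condensed, the pattern traced below is (error handling and the trap cleanup omitted):

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only probes for existence; once it fails the target has exited
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'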
00:06:15.234 10:13:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1704821 ]] 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1704821 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1704821 00:06:15.234 10:13:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.801 10:13:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.801 10:13:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.801 10:13:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1704821 00:06:15.801 10:13:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.801 10:13:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.801 10:13:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.801 10:13:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.801 SPDK target shutdown done 00:06:15.801 10:13:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.801 Success 00:06:15.801 00:06:15.801 real 0m1.400s 00:06:15.801 user 0m0.945s 00:06:15.801 sys 0m0.425s 00:06:15.801 10:13:40 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.801 10:13:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.801 ************************************ 00:06:15.801 END TEST json_config_extra_key 00:06:15.801 ************************************ 00:06:15.801 10:13:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:15.801 10:13:40 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.801 10:13:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.801 10:13:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.801 10:13:40 -- common/autotest_common.sh@10 -- # set +x 00:06:15.801 ************************************ 00:06:15.801 START TEST alias_rpc 00:06:15.801 ************************************ 00:06:15.801 10:13:40 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.059 * Looking for test storage... 
00:06:16.059 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:06:16.059 10:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.059 10:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1705135 00:06:16.059 10:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1705135 00:06:16.059 10:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:16.059 10:13:40 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1705135 ']' 00:06:16.059 10:13:40 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.059 10:13:40 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.059 10:13:40 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.059 10:13:40 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.059 10:13:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.059 [2024-07-15 10:13:40.721967] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:16.059 [2024-07-15 10:13:40.722023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705135 ] 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:16.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.059 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:16.059 [2024-07-15 10:13:40.814160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.317 [2024-07-15 10:13:40.888461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.882 10:13:41 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.882 10:13:41 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:16.882 10:13:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:17.140 10:13:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1705135 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1705135 ']' 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1705135 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1705135 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1705135' 00:06:17.140 killing process with pid 1705135 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@967 -- # kill 1705135 00:06:17.140 10:13:41 alias_rpc -- common/autotest_common.sh@972 -- # wait 1705135 00:06:17.398 00:06:17.398 real 0m1.532s 00:06:17.398 user 0m1.600s 00:06:17.398 sys 0m0.485s 00:06:17.398 10:13:42 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.398 10:13:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.398 ************************************ 00:06:17.398 END TEST alias_rpc 00:06:17.398 ************************************ 00:06:17.398 10:13:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.398 10:13:42 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:17.398 10:13:42 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:17.398 10:13:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.398 10:13:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.398 10:13:42 -- common/autotest_common.sh@10 -- # set +x 00:06:17.398 ************************************ 00:06:17.398 START TEST spdkcli_tcp 00:06:17.398 ************************************ 00:06:17.398 10:13:42 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:17.657 * Looking for test storage... 
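Note the killprocess helper traced above: it verifies that the recorded PID still names an SPDK reactor before signalling it, then waits for the process to exit. A rough sketch of the non-sudo branch shown in the trace (simplified; $spdk_tgt_pid is the PID recorded when the target was started):

    if kill -0 "$spdk_tgt_pid" 2>/dev/null && [ "$(uname)" = Linux ]; then
        comm=$(ps --no-headers -o comm= "$spdk_tgt_pid")   # reports reactor_0 for a live spdk_tgt
        if [ "$comm" != sudo ]; then
            echo "killing process with pid $spdk_tgt_pid"
            kill "$spdk_tgt_pid"
            wait "$spdk_tgt_pid"
        fi
    fi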
00:06:17.657 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:17.657 10:13:42 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.657 10:13:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1705458 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1705458 00:06:17.657 10:13:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:17.657 10:13:42 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1705458 ']' 00:06:17.657 10:13:42 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.657 10:13:42 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.657 10:13:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.657 10:13:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.657 10:13:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.657 [2024-07-15 10:13:42.346029] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:17.657 [2024-07-15 10:13:42.346077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705458 ] 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:17.657 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:17.657 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:17.657 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:17.657 [2024-07-15 10:13:42.438117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.916 [2024-07-15 10:13:42.511783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.916 [2024-07-15 10:13:42.511785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.484 10:13:43 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.484 10:13:43 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:18.484 10:13:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:18.484 10:13:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1705635 00:06:18.484 10:13:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:18.743 [ 00:06:18.743 "bdev_malloc_delete", 00:06:18.743 "bdev_malloc_create", 00:06:18.743 "bdev_null_resize", 00:06:18.743 "bdev_null_delete", 00:06:18.743 "bdev_null_create", 00:06:18.743 "bdev_nvme_cuse_unregister", 00:06:18.743 "bdev_nvme_cuse_register", 00:06:18.743 "bdev_opal_new_user", 00:06:18.743 "bdev_opal_set_lock_state", 00:06:18.743 "bdev_opal_delete", 00:06:18.743 "bdev_opal_get_info", 00:06:18.743 "bdev_opal_create", 00:06:18.743 "bdev_nvme_opal_revert", 00:06:18.743 "bdev_nvme_opal_init", 00:06:18.743 "bdev_nvme_send_cmd", 00:06:18.743 "bdev_nvme_get_path_iostat", 00:06:18.743 "bdev_nvme_get_mdns_discovery_info", 00:06:18.743 "bdev_nvme_stop_mdns_discovery", 00:06:18.743 "bdev_nvme_start_mdns_discovery", 00:06:18.743 "bdev_nvme_set_multipath_policy", 00:06:18.743 "bdev_nvme_set_preferred_path", 00:06:18.743 "bdev_nvme_get_io_paths", 00:06:18.743 "bdev_nvme_remove_error_injection", 00:06:18.743 "bdev_nvme_add_error_injection", 00:06:18.743 "bdev_nvme_get_discovery_info", 00:06:18.743 "bdev_nvme_stop_discovery", 00:06:18.743 "bdev_nvme_start_discovery", 00:06:18.743 "bdev_nvme_get_controller_health_info", 00:06:18.743 "bdev_nvme_disable_controller", 00:06:18.743 "bdev_nvme_enable_controller", 00:06:18.743 "bdev_nvme_reset_controller", 00:06:18.743 "bdev_nvme_get_transport_statistics", 00:06:18.743 "bdev_nvme_apply_firmware", 00:06:18.743 "bdev_nvme_detach_controller", 00:06:18.743 
"bdev_nvme_get_controllers", 00:06:18.743 "bdev_nvme_attach_controller", 00:06:18.743 "bdev_nvme_set_hotplug", 00:06:18.743 "bdev_nvme_set_options", 00:06:18.743 "bdev_passthru_delete", 00:06:18.743 "bdev_passthru_create", 00:06:18.743 "bdev_lvol_set_parent_bdev", 00:06:18.743 "bdev_lvol_set_parent", 00:06:18.743 "bdev_lvol_check_shallow_copy", 00:06:18.743 "bdev_lvol_start_shallow_copy", 00:06:18.743 "bdev_lvol_grow_lvstore", 00:06:18.743 "bdev_lvol_get_lvols", 00:06:18.743 "bdev_lvol_get_lvstores", 00:06:18.743 "bdev_lvol_delete", 00:06:18.743 "bdev_lvol_set_read_only", 00:06:18.743 "bdev_lvol_resize", 00:06:18.743 "bdev_lvol_decouple_parent", 00:06:18.743 "bdev_lvol_inflate", 00:06:18.743 "bdev_lvol_rename", 00:06:18.743 "bdev_lvol_clone_bdev", 00:06:18.743 "bdev_lvol_clone", 00:06:18.743 "bdev_lvol_snapshot", 00:06:18.743 "bdev_lvol_create", 00:06:18.743 "bdev_lvol_delete_lvstore", 00:06:18.743 "bdev_lvol_rename_lvstore", 00:06:18.743 "bdev_lvol_create_lvstore", 00:06:18.743 "bdev_raid_set_options", 00:06:18.743 "bdev_raid_remove_base_bdev", 00:06:18.743 "bdev_raid_add_base_bdev", 00:06:18.743 "bdev_raid_delete", 00:06:18.743 "bdev_raid_create", 00:06:18.743 "bdev_raid_get_bdevs", 00:06:18.743 "bdev_error_inject_error", 00:06:18.743 "bdev_error_delete", 00:06:18.743 "bdev_error_create", 00:06:18.743 "bdev_split_delete", 00:06:18.743 "bdev_split_create", 00:06:18.743 "bdev_delay_delete", 00:06:18.743 "bdev_delay_create", 00:06:18.743 "bdev_delay_update_latency", 00:06:18.743 "bdev_zone_block_delete", 00:06:18.743 "bdev_zone_block_create", 00:06:18.743 "blobfs_create", 00:06:18.743 "blobfs_detect", 00:06:18.743 "blobfs_set_cache_size", 00:06:18.743 "bdev_crypto_delete", 00:06:18.743 "bdev_crypto_create", 00:06:18.743 "bdev_compress_delete", 00:06:18.743 "bdev_compress_create", 00:06:18.743 "bdev_compress_get_orphans", 00:06:18.743 "bdev_aio_delete", 00:06:18.743 "bdev_aio_rescan", 00:06:18.743 "bdev_aio_create", 00:06:18.743 "bdev_ftl_set_property", 00:06:18.743 "bdev_ftl_get_properties", 00:06:18.743 "bdev_ftl_get_stats", 00:06:18.743 "bdev_ftl_unmap", 00:06:18.743 "bdev_ftl_unload", 00:06:18.743 "bdev_ftl_delete", 00:06:18.743 "bdev_ftl_load", 00:06:18.743 "bdev_ftl_create", 00:06:18.743 "bdev_virtio_attach_controller", 00:06:18.743 "bdev_virtio_scsi_get_devices", 00:06:18.743 "bdev_virtio_detach_controller", 00:06:18.743 "bdev_virtio_blk_set_hotplug", 00:06:18.743 "bdev_iscsi_delete", 00:06:18.743 "bdev_iscsi_create", 00:06:18.743 "bdev_iscsi_set_options", 00:06:18.743 "accel_error_inject_error", 00:06:18.743 "ioat_scan_accel_module", 00:06:18.743 "dsa_scan_accel_module", 00:06:18.743 "iaa_scan_accel_module", 00:06:18.743 "dpdk_cryptodev_get_driver", 00:06:18.743 "dpdk_cryptodev_set_driver", 00:06:18.743 "dpdk_cryptodev_scan_accel_module", 00:06:18.743 "compressdev_scan_accel_module", 00:06:18.743 "keyring_file_remove_key", 00:06:18.743 "keyring_file_add_key", 00:06:18.743 "keyring_linux_set_options", 00:06:18.743 "iscsi_get_histogram", 00:06:18.743 "iscsi_enable_histogram", 00:06:18.743 "iscsi_set_options", 00:06:18.743 "iscsi_get_auth_groups", 00:06:18.743 "iscsi_auth_group_remove_secret", 00:06:18.743 "iscsi_auth_group_add_secret", 00:06:18.743 "iscsi_delete_auth_group", 00:06:18.743 "iscsi_create_auth_group", 00:06:18.743 "iscsi_set_discovery_auth", 00:06:18.743 "iscsi_get_options", 00:06:18.743 "iscsi_target_node_request_logout", 00:06:18.743 "iscsi_target_node_set_redirect", 00:06:18.743 "iscsi_target_node_set_auth", 00:06:18.743 "iscsi_target_node_add_lun", 00:06:18.743 
"iscsi_get_stats", 00:06:18.743 "iscsi_get_connections", 00:06:18.743 "iscsi_portal_group_set_auth", 00:06:18.743 "iscsi_start_portal_group", 00:06:18.743 "iscsi_delete_portal_group", 00:06:18.743 "iscsi_create_portal_group", 00:06:18.743 "iscsi_get_portal_groups", 00:06:18.743 "iscsi_delete_target_node", 00:06:18.743 "iscsi_target_node_remove_pg_ig_maps", 00:06:18.743 "iscsi_target_node_add_pg_ig_maps", 00:06:18.743 "iscsi_create_target_node", 00:06:18.743 "iscsi_get_target_nodes", 00:06:18.743 "iscsi_delete_initiator_group", 00:06:18.743 "iscsi_initiator_group_remove_initiators", 00:06:18.743 "iscsi_initiator_group_add_initiators", 00:06:18.743 "iscsi_create_initiator_group", 00:06:18.743 "iscsi_get_initiator_groups", 00:06:18.743 "nvmf_set_crdt", 00:06:18.744 "nvmf_set_config", 00:06:18.744 "nvmf_set_max_subsystems", 00:06:18.744 "nvmf_stop_mdns_prr", 00:06:18.744 "nvmf_publish_mdns_prr", 00:06:18.744 "nvmf_subsystem_get_listeners", 00:06:18.744 "nvmf_subsystem_get_qpairs", 00:06:18.744 "nvmf_subsystem_get_controllers", 00:06:18.744 "nvmf_get_stats", 00:06:18.744 "nvmf_get_transports", 00:06:18.744 "nvmf_create_transport", 00:06:18.744 "nvmf_get_targets", 00:06:18.744 "nvmf_delete_target", 00:06:18.744 "nvmf_create_target", 00:06:18.744 "nvmf_subsystem_allow_any_host", 00:06:18.744 "nvmf_subsystem_remove_host", 00:06:18.744 "nvmf_subsystem_add_host", 00:06:18.744 "nvmf_ns_remove_host", 00:06:18.744 "nvmf_ns_add_host", 00:06:18.744 "nvmf_subsystem_remove_ns", 00:06:18.744 "nvmf_subsystem_add_ns", 00:06:18.744 "nvmf_subsystem_listener_set_ana_state", 00:06:18.744 "nvmf_discovery_get_referrals", 00:06:18.744 "nvmf_discovery_remove_referral", 00:06:18.744 "nvmf_discovery_add_referral", 00:06:18.744 "nvmf_subsystem_remove_listener", 00:06:18.744 "nvmf_subsystem_add_listener", 00:06:18.744 "nvmf_delete_subsystem", 00:06:18.744 "nvmf_create_subsystem", 00:06:18.744 "nvmf_get_subsystems", 00:06:18.744 "env_dpdk_get_mem_stats", 00:06:18.744 "nbd_get_disks", 00:06:18.744 "nbd_stop_disk", 00:06:18.744 "nbd_start_disk", 00:06:18.744 "ublk_recover_disk", 00:06:18.744 "ublk_get_disks", 00:06:18.744 "ublk_stop_disk", 00:06:18.744 "ublk_start_disk", 00:06:18.744 "ublk_destroy_target", 00:06:18.744 "ublk_create_target", 00:06:18.744 "virtio_blk_create_transport", 00:06:18.744 "virtio_blk_get_transports", 00:06:18.744 "vhost_controller_set_coalescing", 00:06:18.744 "vhost_get_controllers", 00:06:18.744 "vhost_delete_controller", 00:06:18.744 "vhost_create_blk_controller", 00:06:18.744 "vhost_scsi_controller_remove_target", 00:06:18.744 "vhost_scsi_controller_add_target", 00:06:18.744 "vhost_start_scsi_controller", 00:06:18.744 "vhost_create_scsi_controller", 00:06:18.744 "thread_set_cpumask", 00:06:18.744 "framework_get_governor", 00:06:18.744 "framework_get_scheduler", 00:06:18.744 "framework_set_scheduler", 00:06:18.744 "framework_get_reactors", 00:06:18.744 "thread_get_io_channels", 00:06:18.744 "thread_get_pollers", 00:06:18.744 "thread_get_stats", 00:06:18.744 "framework_monitor_context_switch", 00:06:18.744 "spdk_kill_instance", 00:06:18.744 "log_enable_timestamps", 00:06:18.744 "log_get_flags", 00:06:18.744 "log_clear_flag", 00:06:18.744 "log_set_flag", 00:06:18.744 "log_get_level", 00:06:18.744 "log_set_level", 00:06:18.744 "log_get_print_level", 00:06:18.744 "log_set_print_level", 00:06:18.744 "framework_enable_cpumask_locks", 00:06:18.744 "framework_disable_cpumask_locks", 00:06:18.744 "framework_wait_init", 00:06:18.744 "framework_start_init", 00:06:18.744 "scsi_get_devices", 00:06:18.744 
"bdev_get_histogram", 00:06:18.744 "bdev_enable_histogram", 00:06:18.744 "bdev_set_qos_limit", 00:06:18.744 "bdev_set_qd_sampling_period", 00:06:18.744 "bdev_get_bdevs", 00:06:18.744 "bdev_reset_iostat", 00:06:18.744 "bdev_get_iostat", 00:06:18.744 "bdev_examine", 00:06:18.744 "bdev_wait_for_examine", 00:06:18.744 "bdev_set_options", 00:06:18.744 "notify_get_notifications", 00:06:18.744 "notify_get_types", 00:06:18.744 "accel_get_stats", 00:06:18.744 "accel_set_options", 00:06:18.744 "accel_set_driver", 00:06:18.744 "accel_crypto_key_destroy", 00:06:18.744 "accel_crypto_keys_get", 00:06:18.744 "accel_crypto_key_create", 00:06:18.744 "accel_assign_opc", 00:06:18.744 "accel_get_module_info", 00:06:18.744 "accel_get_opc_assignments", 00:06:18.744 "vmd_rescan", 00:06:18.744 "vmd_remove_device", 00:06:18.744 "vmd_enable", 00:06:18.744 "sock_get_default_impl", 00:06:18.744 "sock_set_default_impl", 00:06:18.744 "sock_impl_set_options", 00:06:18.744 "sock_impl_get_options", 00:06:18.744 "iobuf_get_stats", 00:06:18.744 "iobuf_set_options", 00:06:18.744 "framework_get_pci_devices", 00:06:18.744 "framework_get_config", 00:06:18.744 "framework_get_subsystems", 00:06:18.744 "trace_get_info", 00:06:18.744 "trace_get_tpoint_group_mask", 00:06:18.744 "trace_disable_tpoint_group", 00:06:18.744 "trace_enable_tpoint_group", 00:06:18.744 "trace_clear_tpoint_mask", 00:06:18.744 "trace_set_tpoint_mask", 00:06:18.744 "keyring_get_keys", 00:06:18.744 "spdk_get_version", 00:06:18.744 "rpc_get_methods" 00:06:18.744 ] 00:06:18.744 10:13:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.744 10:13:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:18.744 10:13:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1705458 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1705458 ']' 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1705458 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1705458 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1705458' 00:06:18.744 killing process with pid 1705458 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1705458 00:06:18.744 10:13:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1705458 00:06:19.002 00:06:19.002 real 0m1.562s 00:06:19.002 user 0m2.806s 00:06:19.002 sys 0m0.498s 00:06:19.002 10:13:43 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.003 10:13:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.003 ************************************ 00:06:19.003 END TEST spdkcli_tcp 00:06:19.003 ************************************ 00:06:19.003 10:13:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.003 10:13:43 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 
00:06:19.003 10:13:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.003 10:13:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.003 10:13:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.261 ************************************ 00:06:19.261 START TEST dpdk_mem_utility 00:06:19.261 ************************************ 00:06:19.261 10:13:43 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:19.261 * Looking for test storage... 00:06:19.261 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:06:19.261 10:13:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:19.261 10:13:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1705791 00:06:19.261 10:13:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1705791 00:06:19.261 10:13:43 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1705791 ']' 00:06:19.261 10:13:43 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.261 10:13:43 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.261 10:13:43 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.261 10:13:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.261 10:13:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.261 10:13:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.261 [2024-07-15 10:13:43.953553] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:19.261 [2024-07-15 10:13:43.953600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705791 ] 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:19.261 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:19.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:19.261 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:19.261 [2024-07-15 10:13:44.045010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.519 [2024-07-15 10:13:44.117918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.091 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.091 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:20.091 10:13:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:20.091 10:13:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:20.091 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.091 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.091 { 00:06:20.091 "filename": "/tmp/spdk_mem_dump.txt" 00:06:20.091 } 00:06:20.091 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.091 10:13:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:20.091 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:20.091 1 heaps totaling size 814.000000 MiB 00:06:20.091 size: 814.000000 MiB heap id: 0 00:06:20.091 end heaps---------- 00:06:20.091 8 mempools totaling size 598.116089 MiB 00:06:20.091 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:20.091 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:20.091 size: 84.521057 MiB name: bdev_io_1705791 00:06:20.091 size: 51.011292 MiB name: evtpool_1705791 00:06:20.091 size: 50.003479 MiB name: msgpool_1705791 00:06:20.091 size: 21.763794 MiB name: PDU_Pool 00:06:20.091 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:20.091 size: 0.026123 MiB name: Session_Pool 00:06:20.091 end mempools------- 00:06:20.091 201 memzones totaling size 4.176453 MiB 00:06:20.091 size: 1.000366 MiB name: RG_ring_0_1705791 00:06:20.091 size: 1.000366 MiB name: RG_ring_1_1705791 00:06:20.091 size: 1.000366 MiB name: RG_ring_4_1705791 00:06:20.091 size: 1.000366 MiB name: RG_ring_5_1705791 00:06:20.091 size: 0.125366 MiB name: RG_ring_2_1705791 00:06:20.091 size: 0.015991 MiB name: RG_ring_3_1705791 00:06:20.091 size: 0.001160 MiB name: 
QAT_SYM_CAPA_GEN_1 00:06:20.091 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:20.091 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:20.092 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:20.092 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_0 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_1 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_2 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_3 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_9 
00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_4 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_5 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_6 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_7 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_8 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_9 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_10 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_23 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_11 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_12 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_13 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_14 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_15 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_16 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_17 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_18 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_19 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_20 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_21 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_22 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_23 00:06:20.092 size: 0.000122 MiB name: 
rte_cryptodev_data_48 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_24 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_25 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_53 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_26 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_27 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_28 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_29 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_30 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_31 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_32 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_33 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_34 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_35 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_36 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_37 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_38 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_39 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_40 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_83 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_41 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_42 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:20.092 
size: 0.000122 MiB name: rte_compressdev_data_43 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_44 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_45 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_46 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:20.092 size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:20.092 size: 0.000122 MiB name: rte_compressdev_data_47 00:06:20.092 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:20.092 end memzones------- 00:06:20.092 10:13:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:20.092 heap id: 0 total size: 814.000000 MiB number of busy elements: 633 number of free elements: 14 00:06:20.092 list of free elements. size: 11.782104 MiB 00:06:20.092 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:20.092 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:20.092 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:20.093 element at address: 0x200003e00000 with size: 0.996460 MiB 00:06:20.093 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:20.093 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:20.093 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:20.093 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:20.093 element at address: 0x20001aa00000 with size: 0.564941 MiB 00:06:20.093 element at address: 0x200003a00000 with size: 0.494324 MiB 00:06:20.093 element at address: 0x20000b200000 with size: 0.489441 MiB 00:06:20.093 element at address: 0x200000800000 with size: 0.486694 MiB 00:06:20.093 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:20.093 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:20.093 list of standard malloc elements. 
size: 199.897888 MiB 00:06:20.093 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:20.093 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:20.093 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:20.093 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:20.093 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:20.093 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:20.093 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:20.093 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:20.093 element at address: 0x20000032bc80 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000032f740 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000333200 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000336cc0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000033a780 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000033e240 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000341d00 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003457c0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000349280 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000034cd40 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000350800 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003542c0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000357d80 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000035b840 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000035f300 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000362dc0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000366880 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000036a340 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000036de00 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003718c0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000375380 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000378e40 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000037c900 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003803c0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000383e80 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000387940 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000038b400 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000038eec0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000392980 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000396440 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000399f00 with size: 0.004395 MiB 00:06:20.093 element at address: 0x20000039d9c0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003a1480 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003a4f40 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003a8a00 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003ac4c0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003aff80 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003b3a40 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003b7500 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003bafc0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003bea80 with size: 0.004395 MiB 
00:06:20.093 element at address: 0x2000003c2540 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003c6000 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003c9ac0 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003cd580 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003d1040 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003d4b00 with size: 0.004395 MiB 00:06:20.093 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:06:20.093 element at address: 0x200000329b80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000032ac00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000032d640 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000032e6c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000331100 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000332180 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000334bc0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000335c40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000338680 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000339700 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000033c140 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000033d1c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000033fc00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000340c80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003436c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000344740 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000347180 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000348200 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000034ac40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000034bcc0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000034e700 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000034f780 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003521c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000353240 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000355c80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000356d00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000359740 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000035a7c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000035d200 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000035e280 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000360cc0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000361d40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000364780 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000365800 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000368240 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003692c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000036bd00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000036cd80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000036f7c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000370840 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000373280 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000374300 with size: 0.004028 MiB 00:06:20.093 element at 
address: 0x200000376d40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000377dc0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000037a800 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000037b880 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000037e2c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000037f340 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000381d80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000382e00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000385840 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003868c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000389300 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000038a380 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000038cdc0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000038de40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000390880 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000391900 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000394340 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003953c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000397e00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000398e80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000039b8c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000039c940 with size: 0.004028 MiB 00:06:20.093 element at address: 0x20000039f380 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003a0400 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003a2e40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003a3ec0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003a6900 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003a7980 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003aa3c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003ab440 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003ade80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003aef00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003b1940 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003b29c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003b5400 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003b6480 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003b8ec0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003b9f40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003bc980 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003bda00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003c0440 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003c14c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003c3f00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003c4f80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003c79c0 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003c8a40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003cb480 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003cc500 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003cef40 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003cffc0 
with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003d2a00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003d3a80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003d6c00 with size: 0.004028 MiB 00:06:20.093 element at address: 0x2000003d7c80 with size: 0.004028 MiB 00:06:20.093 element at address: 0x200000200000 with size: 0.000305 MiB 00:06:20.094 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:20.094 element at address: 0x200000200140 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200200 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000002002c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200380 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200440 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200500 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000002005c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200680 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200740 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200800 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000002008c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200980 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200a40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200b00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200bc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200c80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200d40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000200e00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000201000 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000002052c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225580 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225640 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225700 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000002257c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225880 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225940 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225a00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225ac0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225b80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225c40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225d00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225dc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225e80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000225f40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226000 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000002260c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226180 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226240 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226300 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226500 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000002265c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226680 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226740 with size: 0.000183 MiB 
00:06:20.094 element at address: 0x200000226800 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000002268c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226980 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226a40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226b00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226bc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226c80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226d40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226e00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226ec0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000226f80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000227040 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000227100 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000329300 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003293c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000329580 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000329640 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000329800 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000032ce80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000032d040 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000032d100 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000032d2c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000330940 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000330b00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000330bc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000330d80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000334400 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003345c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000334680 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000334840 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000337ec0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000338080 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000338140 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000338300 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000033b980 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000033bb40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000033bc00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000033bdc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000033f440 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000033f600 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000033f6c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000033f880 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000342f00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003430c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000343180 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000343340 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003469c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000346b80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000346c40 with size: 0.000183 MiB 00:06:20.094 element at 
address: 0x200000346e00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000034a480 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000034a640 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000034a700 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000034a8c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000034df40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000034e100 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000034e1c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000034e380 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000351a00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000351bc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000351c80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000351e40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003554c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000355680 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000355740 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000355900 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000358f80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000359140 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000359200 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003593c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000035ca40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000035cc00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000035ccc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000035ce80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000360500 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003606c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000360780 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000360940 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000363fc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000364180 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000364240 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000364400 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000367a80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000367c40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000367d00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000367ec0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000036b540 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000036b700 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000036b7c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000036b980 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000036f000 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000036f1c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000036f280 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000036f440 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000372ac0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000372c80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000372d40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000372f00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000376580 
with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000376740 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000376800 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003769c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000037a040 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000037a200 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000037a2c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000037a480 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000037db00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000037dcc0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000037dd80 with size: 0.000183 MiB 00:06:20.094 element at address: 0x20000037df40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003815c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000381780 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000381840 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000381a00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000385080 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000385240 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000385300 with size: 0.000183 MiB 00:06:20.094 element at address: 0x2000003854c0 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000388b40 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000388d00 with size: 0.000183 MiB 00:06:20.094 element at address: 0x200000388dc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000388f80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000038c600 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000038c7c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000038c880 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000038ca40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003900c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000390280 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000390340 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000390500 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000393b80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000393d40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000393e00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000393fc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000397640 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000397800 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003978c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200000397a80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000039b100 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000039b2c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000039b380 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000039b540 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000039ebc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000039ed80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000039ee40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000039f000 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a2680 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a2840 with size: 0.000183 MiB 
00:06:20.095 element at address: 0x2000003a2900 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a2ac0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a6140 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a6300 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a63c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a6580 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a9c00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a9dc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003a9e80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003aa040 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003ad6c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003ad880 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003ad940 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003adb00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b1180 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b1340 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b1400 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b15c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b4c40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b4e00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b4ec0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b5080 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b8700 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b88c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b8980 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003b8b40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003bc1c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003bc380 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003bc440 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003bc600 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003bfc80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003bfe40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003bff00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c00c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c3740 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c3900 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c39c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c3b80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c7200 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c73c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c7480 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003c7640 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003cacc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003cae80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003caf40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003cb100 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003ce780 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003ce940 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003cea00 with size: 0.000183 MiB 00:06:20.095 element at 
address: 0x2000003cebc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003d2240 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003d2400 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003d24c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003d2680 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003d5dc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003d64c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003d6580 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000003d6880 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7e8c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7e980 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7ea40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7eb00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7ebc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7ec80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7ed40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7ee00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7eec0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7ef80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f040 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f100 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f1c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f280 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f340 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f400 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f4c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f580 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f640 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f700 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f7c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f880 with size: 0.000183 MiB 00:06:20.095 element at address: 0x200003a7f940 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27da00 
with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa90a00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa90ac0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa90b80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa90c40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa90d00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa90dc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa90e80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa90f40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91000 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa910c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91180 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91240 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91300 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:20.095 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92980 with size: 0.000183 MiB 
00:06:20.096 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:20.096 element at 
address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:20.096 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e040 
with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:20.096 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:20.097 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:20.097 list of memzone associated elements. 
size: 602.320007 MiB 00:06:20.097 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:20.097 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:20.097 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:20.097 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:20.097 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:20.097 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1705791_0 00:06:20.097 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:20.097 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1705791_0 00:06:20.097 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:20.097 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1705791_0 00:06:20.097 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:20.097 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:20.097 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:20.097 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:20.097 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:20.097 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1705791 00:06:20.097 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:20.097 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1705791 00:06:20.097 element at address: 0x2000002271c0 with size: 1.008118 MiB 00:06:20.097 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1705791 00:06:20.097 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:20.097 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:20.097 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:20.097 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:20.097 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:20.097 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:20.097 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:20.097 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:20.097 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:20.097 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1705791 00:06:20.097 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:20.097 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1705791 00:06:20.097 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:20.097 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1705791 00:06:20.097 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:20.097 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1705791 00:06:20.097 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:06:20.097 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1705791 00:06:20.097 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:20.097 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:20.097 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:20.097 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:20.097 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:20.097 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:20.097 element at address: 0x200000205380 with size: 0.125488 MiB 00:06:20.097 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1705791 00:06:20.097 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:20.097 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:20.097 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:20.097 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:20.097 element at address: 0x2000002010c0 with size: 0.016113 MiB 00:06:20.097 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1705791 00:06:20.097 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:20.097 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:20.097 element at address: 0x2000003d5f80 with size: 0.001282 MiB 00:06:20.097 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:06:20.097 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.0_qat 00:06:20.097 element at address: 0x2000003d2840 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.1_qat 00:06:20.097 element at address: 0x2000003ced80 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.2_qat 00:06:20.097 element at address: 0x2000003cb2c0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.3_qat 00:06:20.097 element at address: 0x2000003c7800 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.4_qat 00:06:20.097 element at address: 0x2000003c3d40 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.5_qat 00:06:20.097 element at address: 0x2000003c0280 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.6_qat 00:06:20.097 element at address: 0x2000003bc7c0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.7_qat 00:06:20.097 element at address: 0x2000003b8d00 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.0_qat 00:06:20.097 element at address: 0x2000003b5240 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.1_qat 00:06:20.097 element at address: 0x2000003b1780 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.2_qat 00:06:20.097 element at address: 0x2000003adcc0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.3_qat 00:06:20.097 element at address: 0x2000003aa200 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.4_qat 00:06:20.097 element at address: 0x2000003a6740 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.5_qat 00:06:20.097 element at address: 0x2000003a2c80 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.6_qat 00:06:20.097 element at address: 0x20000039f1c0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.7_qat 00:06:20.097 element at address: 0x20000039b700 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.0_qat 00:06:20.097 element at address: 0x200000397c40 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 
0000:1c:01.1_qat 00:06:20.097 element at address: 0x200000394180 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.2_qat 00:06:20.097 element at address: 0x2000003906c0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.3_qat 00:06:20.097 element at address: 0x20000038cc00 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.4_qat 00:06:20.097 element at address: 0x200000389140 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.5_qat 00:06:20.097 element at address: 0x200000385680 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.6_qat 00:06:20.097 element at address: 0x200000381bc0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.7_qat 00:06:20.097 element at address: 0x20000037e100 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.0_qat 00:06:20.097 element at address: 0x20000037a640 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.1_qat 00:06:20.097 element at address: 0x200000376b80 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.2_qat 00:06:20.097 element at address: 0x2000003730c0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.3_qat 00:06:20.097 element at address: 0x20000036f600 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.4_qat 00:06:20.097 element at address: 0x20000036bb40 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.5_qat 00:06:20.097 element at address: 0x200000368080 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.6_qat 00:06:20.097 element at address: 0x2000003645c0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.7_qat 00:06:20.097 element at address: 0x200000360b00 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.0_qat 00:06:20.097 element at address: 0x20000035d040 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.1_qat 00:06:20.097 element at address: 0x200000359580 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.2_qat 00:06:20.097 element at address: 0x200000355ac0 with size: 0.000427 MiB 00:06:20.097 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.3_qat 00:06:20.098 element at address: 0x200000352000 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.4_qat 00:06:20.098 element at address: 0x20000034e540 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.5_qat 00:06:20.098 element at address: 0x20000034aa80 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.6_qat 00:06:20.098 element at address: 0x200000346fc0 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.7_qat 00:06:20.098 element at address: 0x200000343500 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.0_qat 00:06:20.098 element at address: 
0x20000033fa40 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.1_qat 00:06:20.098 element at address: 0x20000033bf80 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.2_qat 00:06:20.098 element at address: 0x2000003384c0 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.3_qat 00:06:20.098 element at address: 0x200000334a00 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.4_qat 00:06:20.098 element at address: 0x200000330f40 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.5_qat 00:06:20.098 element at address: 0x20000032d480 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.6_qat 00:06:20.098 element at address: 0x2000003299c0 with size: 0.000427 MiB 00:06:20.098 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.7_qat 00:06:20.098 element at address: 0x2000003d6740 with size: 0.000305 MiB 00:06:20.098 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:06:20.098 element at address: 0x2000002263c0 with size: 0.000305 MiB 00:06:20.098 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1705791 00:06:20.098 element at address: 0x200000200ec0 with size: 0.000305 MiB 00:06:20.098 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1705791 00:06:20.098 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:20.098 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:20.098 element at address: 0x2000003d6940 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:06:20.098 element at address: 0x2000003d6640 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:06:20.098 element at address: 0x2000003d5e80 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:06:20.098 element at address: 0x2000003d2740 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:06:20.098 element at address: 0x2000003d2580 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:06:20.098 element at address: 0x2000003d2300 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:06:20.098 element at address: 0x2000003cec80 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:06:20.098 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:06:20.098 element at address: 0x2000003ce840 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:06:20.098 element at address: 0x2000003cb1c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:06:20.098 element at address: 0x2000003cb000 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_7 00:06:20.098 element at address: 0x2000003cad80 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3 00:06:20.098 element at 
address: 0x2000003c7700 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:06:20.098 element at address: 0x2000003c7540 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:06:20.098 element at address: 0x2000003c72c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:06:20.098 element at address: 0x2000003c3c40 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:06:20.098 element at address: 0x2000003c3a80 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:06:20.098 element at address: 0x2000003c3800 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:06:20.098 element at address: 0x2000003c0180 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:06:20.098 element at address: 0x2000003bffc0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:06:20.098 element at address: 0x2000003bfd40 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:06:20.098 element at address: 0x2000003bc6c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:06:20.098 element at address: 0x2000003bc500 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:06:20.098 element at address: 0x2000003bc280 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:06:20.098 element at address: 0x2000003b8c00 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:06:20.098 element at address: 0x2000003b8a40 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:06:20.098 element at address: 0x2000003b87c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:06:20.098 element at address: 0x2000003b5140 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:06:20.098 element at address: 0x2000003b4f80 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:06:20.098 element at address: 0x2000003b4d00 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:06:20.098 element at address: 0x2000003b1680 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:06:20.098 element at address: 0x2000003b14c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:06:20.098 element at address: 0x2000003b1240 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:06:20.098 element at address: 0x2000003adbc0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:06:20.098 element at address: 0x2000003ada00 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_23 00:06:20.098 element at address: 0x2000003ad780 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:06:20.098 element at address: 0x2000003aa100 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:06:20.098 element at address: 0x2000003a9f40 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:06:20.098 element at address: 0x2000003a9cc0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:06:20.098 element at address: 0x2000003a6640 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:06:20.098 element at address: 0x2000003a6480 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:06:20.098 element at address: 0x2000003a6200 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:06:20.098 element at address: 0x2000003a2b80 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:06:20.098 element at address: 0x2000003a29c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:06:20.098 element at address: 0x2000003a2740 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:06:20.098 element at address: 0x20000039f0c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:06:20.098 element at address: 0x20000039ef00 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:06:20.098 element at address: 0x20000039ec80 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:06:20.098 element at address: 0x20000039b600 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_32 00:06:20.098 element at address: 0x20000039b440 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:06:20.098 element at address: 0x20000039b1c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:06:20.098 element at address: 0x200000397b40 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:06:20.098 element at address: 0x200000397980 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:06:20.098 element at address: 0x200000397700 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:06:20.098 element at address: 0x200000394080 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:06:20.098 element at address: 0x200000393ec0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:06:20.098 element at address: 0x200000393c40 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:06:20.098 element at address: 
0x2000003905c0 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:06:20.098 element at address: 0x200000390400 with size: 0.000244 MiB 00:06:20.098 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:06:20.098 element at address: 0x200000390180 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:06:20.099 element at address: 0x20000038cb00 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:06:20.099 element at address: 0x20000038c940 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:06:20.099 element at address: 0x20000038c6c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:06:20.099 element at address: 0x200000389040 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:06:20.099 element at address: 0x200000388e80 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:06:20.099 element at address: 0x200000388c00 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:06:20.099 element at address: 0x200000385580 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:06:20.099 element at address: 0x2000003853c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:06:20.099 element at address: 0x200000385140 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:06:20.099 element at address: 0x200000381ac0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:06:20.099 element at address: 0x200000381900 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:06:20.099 element at address: 0x200000381680 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:06:20.099 element at address: 0x20000037e000 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:06:20.099 element at address: 0x20000037de40 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 00:06:20.099 element at address: 0x20000037dbc0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:06:20.099 element at address: 0x20000037a540 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:06:20.099 element at address: 0x20000037a380 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:06:20.099 element at address: 0x20000037a100 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:06:20.099 element at address: 0x200000376a80 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:06:20.099 element at address: 0x2000003768c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_53 00:06:20.099 element at address: 0x200000376640 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:06:20.099 element at address: 0x200000372fc0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:06:20.099 element at address: 0x200000372e00 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:06:20.099 element at address: 0x200000372b80 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:06:20.099 element at address: 0x20000036f500 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:06:20.099 element at address: 0x20000036f340 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:06:20.099 element at address: 0x20000036f0c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28 00:06:20.099 element at address: 0x20000036ba40 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:06:20.099 element at address: 0x20000036b880 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:06:20.099 element at address: 0x20000036b600 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:06:20.099 element at address: 0x200000367f80 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:06:20.099 element at address: 0x200000367dc0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:06:20.099 element at address: 0x200000367b40 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:06:20.099 element at address: 0x2000003644c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:06:20.099 element at address: 0x200000364300 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:06:20.099 element at address: 0x200000364080 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:06:20.099 element at address: 0x200000360a00 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:06:20.099 element at address: 0x200000360840 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:06:20.099 element at address: 0x2000003605c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:06:20.099 element at address: 0x20000035cf40 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:06:20.099 element at address: 0x20000035cd80 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:06:20.099 element at address: 0x20000035cb00 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:06:20.099 element at address: 
0x200000359480 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:06:20.099 element at address: 0x2000003592c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:06:20.099 element at address: 0x200000359040 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:06:20.099 element at address: 0x2000003559c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:06:20.099 element at address: 0x200000355800 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:06:20.099 element at address: 0x200000355580 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:06:20.099 element at address: 0x200000351f00 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:06:20.099 element at address: 0x200000351d40 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:06:20.099 element at address: 0x200000351ac0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:06:20.099 element at address: 0x20000034e440 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 00:06:20.099 element at address: 0x20000034e280 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:06:20.099 element at address: 0x20000034e000 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:06:20.099 element at address: 0x20000034a980 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:06:20.099 element at address: 0x20000034a7c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:06:20.099 element at address: 0x20000034a540 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:06:20.099 element at address: 0x200000346ec0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:06:20.099 element at address: 0x200000346d00 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:06:20.099 element at address: 0x200000346a80 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:06:20.099 element at address: 0x200000343400 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:06:20.099 element at address: 0x200000343240 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:06:20.099 element at address: 0x200000342fc0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:06:20.099 element at address: 0x20000033f940 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:06:20.099 element at address: 0x20000033f780 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 
0.000122 MiB name: rte_cryptodev_data_83 00:06:20.099 element at address: 0x20000033f500 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:06:20.099 element at address: 0x20000033be80 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:06:20.099 element at address: 0x20000033bcc0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:06:20.099 element at address: 0x20000033ba40 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:06:20.099 element at address: 0x2000003383c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:06:20.099 element at address: 0x200000338200 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:06:20.099 element at address: 0x200000337f80 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:06:20.099 element at address: 0x200000334900 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:06:20.099 element at address: 0x200000334740 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:06:20.099 element at address: 0x2000003344c0 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:06:20.099 element at address: 0x200000330e40 with size: 0.000244 MiB 00:06:20.099 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:06:20.099 element at address: 0x200000330c80 with size: 0.000244 MiB 00:06:20.100 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:06:20.100 element at address: 0x200000330a00 with size: 0.000244 MiB 00:06:20.100 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:06:20.100 element at address: 0x20000032d380 with size: 0.000244 MiB 00:06:20.100 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:06:20.100 element at address: 0x20000032d1c0 with size: 0.000244 MiB 00:06:20.100 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:06:20.100 element at address: 0x20000032cf40 with size: 0.000244 MiB 00:06:20.100 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:06:20.100 element at address: 0x2000003298c0 with size: 0.000244 MiB 00:06:20.100 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:06:20.100 element at address: 0x200000329700 with size: 0.000244 MiB 00:06:20.100 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:06:20.100 element at address: 0x200000329480 with size: 0.000244 MiB 00:06:20.100 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:06:20.100 element at address: 0x2000003d5d00 with size: 0.000183 MiB 00:06:20.100 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:06:20.100 10:13:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:20.100 10:13:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1705791 00:06:20.100 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1705791 ']' 00:06:20.100 10:13:44 
dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1705791 00:06:20.100 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:20.100 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.359 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1705791 00:06:20.359 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.359 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.359 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1705791' 00:06:20.359 killing process with pid 1705791 00:06:20.359 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1705791 00:06:20.359 10:13:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1705791 00:06:20.618 00:06:20.618 real 0m1.412s 00:06:20.618 user 0m1.466s 00:06:20.618 sys 0m0.431s 00:06:20.618 10:13:45 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.618 10:13:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.618 ************************************ 00:06:20.618 END TEST dpdk_mem_utility 00:06:20.618 ************************************ 00:06:20.618 10:13:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.618 10:13:45 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:20.618 10:13:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.618 10:13:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.618 10:13:45 -- common/autotest_common.sh@10 -- # set +x 00:06:20.618 ************************************ 00:06:20.618 START TEST event 00:06:20.618 ************************************ 00:06:20.618 10:13:45 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:06:20.618 * Looking for test storage... 00:06:20.877 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:06:20.877 10:13:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:20.877 10:13:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.877 10:13:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.877 10:13:45 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:20.877 10:13:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.877 10:13:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.877 ************************************ 00:06:20.877 START TEST event_perf 00:06:20.877 ************************************ 00:06:20.877 10:13:45 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.877 Running I/O for 1 seconds...[2024-07-15 10:13:45.478297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
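The event_perf binary above is started with reactor mask 0xF and a 1 second run time. A minimal sketch of reproducing that invocation by hand against the same workspace, keeping only the per-core counters it prints at the end; the SPDK_DIR variable, the sudo prefix, and the grep filter are illustrative assumptions, not part of the test script:

  # Hypothetical manual re-run of the event_perf invocation recorded above.
  SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
  sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1 \
      | grep -E '^lcore [0-9]+:'    # keep only the "lcore N: <events>" summary lines (sudo assumed for hugepage access)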
00:06:20.877 [2024-07-15 10:13:45.478367] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706113 ] 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:01.6 cannot be used 
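Each QAT virtual function the EAL cannot claim produces the same pair of messages seen above. A small, hedged helper for summarizing how many of these refusals a saved console log contains per PCI root; build.log is a placeholder file name, not something this job writes under that name:

  # Count "Requested device <BDF> cannot be used" refusals per PCI root complex.
  grep -oE 'Requested device [0-9a-f:.]+' build.log \
      | awk '{split($3, b, ":"); roots[b[1] ":" b[2]]++} END {for (r in roots) print r, roots[r]}'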
00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:20.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:20.877 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:20.877 [2024-07-15 10:13:45.569171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.877 [2024-07-15 10:13:45.641787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.877 [2024-07-15 10:13:45.641884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.877 [2024-07-15 10:13:45.641968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.877 [2024-07-15 10:13:45.641970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.316 Running I/O for 1 seconds... 00:06:22.316 lcore 0: 217782 00:06:22.316 lcore 1: 217782 00:06:22.316 lcore 2: 217781 00:06:22.316 lcore 3: 217781 00:06:22.316 done. 00:06:22.316 00:06:22.316 real 0m1.260s 00:06:22.316 user 0m4.146s 00:06:22.316 sys 0m0.109s 00:06:22.316 10:13:46 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.316 10:13:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.316 ************************************ 00:06:22.316 END TEST event_perf 00:06:22.316 ************************************ 00:06:22.316 10:13:46 event -- common/autotest_common.sh@1142 -- # return 0 00:06:22.316 10:13:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:22.316 10:13:46 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:22.316 10:13:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.316 10:13:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.316 ************************************ 00:06:22.316 START TEST event_reactor 00:06:22.316 ************************************ 00:06:22.316 10:13:46 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:22.316 [2024-07-15 10:13:46.820522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
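The per-lcore counters printed by event_perf a few entries above (lcore 0 through 3, roughly 217k events each over the 1 second run) can be reduced to one aggregate figure. A hedged sketch, reusing the hypothetical SPDK_DIR from the earlier example:

  # Sum the per-lcore counters into a single events-per-second total.
  sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1 2>/dev/null \
      | awk '/^lcore [0-9]+:/ {total += $3} END {print "total events/sec:", total}'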
00:06:22.316 [2024-07-15 10:13:46.820580] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706399 ] 00:06:22.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.316 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:22.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.316 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:22.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.316 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:22.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.316 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:22.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.316 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:22.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.316 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:22.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.316 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:22.316 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.316 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:22.317 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:22.317 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:22.317 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:22.317 [2024-07-15 10:13:46.913681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.317 [2024-07-15 10:13:46.978654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.307 test_start 00:06:23.307 oneshot 00:06:23.307 tick 100 00:06:23.307 tick 100 00:06:23.307 tick 250 00:06:23.307 tick 100 00:06:23.307 tick 100 00:06:23.307 tick 250 00:06:23.307 tick 100 00:06:23.307 tick 500 00:06:23.307 tick 100 00:06:23.307 tick 100 00:06:23.307 tick 250 00:06:23.307 tick 100 00:06:23.307 tick 100 00:06:23.307 test_end 00:06:23.307 00:06:23.307 real 0m1.249s 00:06:23.307 user 0m1.139s 00:06:23.307 sys 0m0.106s 00:06:23.307 10:13:48 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.307 10:13:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:23.307 ************************************ 00:06:23.307 END TEST event_reactor 00:06:23.307 ************************************ 00:06:23.307 10:13:48 event -- common/autotest_common.sh@1142 -- # return 0 00:06:23.307 10:13:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.307 10:13:48 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:23.307 10:13:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.307 10:13:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.564 ************************************ 00:06:23.564 START TEST event_reactor_perf 00:06:23.564 ************************************ 00:06:23.565 10:13:48 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.565 [2024-07-15 10:13:48.154771] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
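The reactor test that just finished prints a fixed marker sequence (test_start, a run of tick 100 / tick 250 / tick 500 lines, then test_end). A hedged sanity check over a captured run, where reactor.out is a placeholder for wherever that output was saved:

  # Check that a captured reactor run contains each tick period and the end marker.
  for period in 100 250 500; do
      grep -q "^tick $period" reactor.out || echo "missing tick $period"
  done
  grep -q '^test_end' reactor.out || echo "missing test_end marker"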
00:06:23.565 [2024-07-15 10:13:48.154832] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706685 ] 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:01.6 cannot be used 
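The reactor_perf run in progress here ends with a single "Performance: N events per second" line, visible a few entries below once the remaining EAL warnings finish. A hedged sketch for tracking that figure across repeated runs, again assuming the same SPDK_DIR workspace path:

  # Repeat the measurement and keep only the "Performance: N events per second" value.
  for i in 1 2 3; do
      sudo "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1 2>/dev/null \
          | awk '/Performance:/ {print $2}'
  done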
00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:23.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:23.565 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:23.565 [2024-07-15 10:13:48.246156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.565 [2024-07-15 10:13:48.315409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.939 test_start 00:06:24.939 test_end 00:06:24.939 Performance: 530837 events per second 00:06:24.939 00:06:24.939 real 0m1.256s 00:06:24.939 user 0m1.147s 00:06:24.939 sys 0m0.104s 00:06:24.939 10:13:49 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.939 10:13:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.939 ************************************ 00:06:24.939 END TEST event_reactor_perf 00:06:24.939 ************************************ 00:06:24.939 10:13:49 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.939 10:13:49 event -- event/event.sh@49 -- # uname -s 00:06:24.939 10:13:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.939 10:13:49 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.939 10:13:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.939 10:13:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.939 10:13:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.939 ************************************ 00:06:24.939 START TEST event_scheduler 00:06:24.939 ************************************ 00:06:24.939 10:13:49 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:24.939 * Looking for test storage... 
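scheduler.sh, which starts here, launches the scheduler app with -m 0xF -p 0x2 --wait-for-rpc -f and then drives it entirely over RPC: dynamic scheduler, framework init, then thread create/set_active/delete through the scheduler_plugin, as the rpc_cmd calls further down show. A hedged manual replay of that sequence; the PYTHONPATH export is an assumption about how the plugin module is made visible to rpc.py and is not shown in this log, and the thread ids are simply the ones this particular run happened to report:

  # Hypothetical manual replay of the RPC calls scheduler.sh issues further down.
  SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
  export PYTHONPATH=$SPDK_DIR/test/event/scheduler   # assumed location of scheduler_plugin.py
  "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic
  "$SPDK_DIR/scripts/rpc.py" framework_start_init
  "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_set_active 11 50   # 11 = thread id observed in this run
  "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_delete 12          # 12 = thread id observed in this run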
00:06:24.939 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler 00:06:24.939 10:13:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.939 10:13:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1706992 00:06:24.939 10:13:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.939 10:13:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.939 10:13:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1706992 00:06:24.939 10:13:49 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1706992 ']' 00:06:24.939 10:13:49 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.939 10:13:49 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.939 10:13:49 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.940 10:13:49 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.940 10:13:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.940 [2024-07-15 10:13:49.595493] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:24.940 [2024-07-15 10:13:49.595541] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706992 ] 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:24.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.940 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:24.940 [2024-07-15 10:13:49.685705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.198 [2024-07-15 10:13:49.757798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.198 [2024-07-15 10:13:49.757883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.198 [2024-07-15 10:13:49.757976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.198 [2024-07-15 10:13:49.757979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.765 10:13:50 
event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:25.765 10:13:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.765 [2024-07-15 10:13:50.396308] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:25.765 [2024-07-15 10:13:50.396330] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:25.765 [2024-07-15 10:13:50.396342] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:25.765 [2024-07-15 10:13:50.396349] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:25.765 [2024-07-15 10:13:50.396357] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.765 10:13:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.765 [2024-07-15 10:13:50.479178] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.765 10:13:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.765 10:13:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.765 ************************************ 00:06:25.765 START TEST scheduler_create_thread 00:06:25.765 ************************************ 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.765 2 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.765 3 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin 
scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.765 4 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.765 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 5 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 6 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 7 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 8 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 9 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.023 10:13:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 10 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.023 10:13:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.394 10:13:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.394 10:13:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:27.394 10:13:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:27.394 10:13:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.394 10:13:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.764 10:13:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.764 00:06:28.764 real 0m2.616s 00:06:28.764 user 0m0.025s 00:06:28.764 sys 0m0.005s 00:06:28.764 10:13:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.764 10:13:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.764 ************************************ 00:06:28.764 END TEST scheduler_create_thread 00:06:28.764 ************************************ 00:06:28.764 10:13:53 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:28.764 10:13:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:28.764 10:13:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1706992 00:06:28.764 10:13:53 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1706992 ']' 00:06:28.764 10:13:53 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1706992 00:06:28.764 10:13:53 
event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:28.764 10:13:53 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.764 10:13:53 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1706992 00:06:28.764 10:13:53 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:28.764 10:13:53 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:28.765 10:13:53 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1706992' 00:06:28.765 killing process with pid 1706992 00:06:28.765 10:13:53 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1706992 00:06:28.765 10:13:53 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1706992 00:06:29.022 [2024-07-15 10:13:53.617316] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:29.022 00:06:29.022 real 0m4.335s 00:06:29.022 user 0m8.031s 00:06:29.022 sys 0m0.434s 00:06:29.022 10:13:53 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.022 10:13:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.022 ************************************ 00:06:29.022 END TEST event_scheduler 00:06:29.022 ************************************ 00:06:29.279 10:13:53 event -- common/autotest_common.sh@1142 -- # return 0 00:06:29.279 10:13:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:29.279 10:13:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:29.279 10:13:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.279 10:13:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.279 10:13:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.279 ************************************ 00:06:29.279 START TEST app_repeat 00:06:29.279 ************************************ 00:06:29.279 10:13:53 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1707718 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1707718' 00:06:29.279 Process app_repeat pid: 1707718 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:29.279 spdk_app_start Round 0 00:06:29.279 10:13:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 
1707718 /var/tmp/spdk-nbd.sock 00:06:29.279 10:13:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1707718 ']' 00:06:29.279 10:13:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.279 10:13:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.279 10:13:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.279 10:13:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.279 10:13:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.279 [2024-07-15 10:13:53.927648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:29.279 [2024-07-15 10:13:53.927690] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707718 ] 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:29.279 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:29.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.279 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:29.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.280 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:29.280 [2024-07-15 10:13:54.014305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.537 [2024-07-15 10:13:54.091290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.537 [2024-07-15 10:13:54.091293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.101 10:13:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.101 10:13:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:30.101 10:13:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.358 Malloc0 00:06:30.358 10:13:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.358 Malloc1 00:06:30.358 10:13:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.358 10:13:55 
event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.358 10:13:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.616 /dev/nbd0 00:06:30.616 10:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.616 10:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.616 1+0 records in 00:06:30.616 1+0 records out 00:06:30.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228585 s, 17.9 MB/s 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.616 10:13:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.616 10:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.616 10:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.616 10:13:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.873 /dev/nbd1 00:06:30.873 10:13:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.873 10:13:55 
event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.873 1+0 records in 00:06:30.873 1+0 records out 00:06:30.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235676 s, 17.4 MB/s 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.873 10:13:55 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.873 10:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.873 10:13:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.874 10:13:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.874 10:13:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.874 10:13:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.131 10:13:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.131 { 00:06:31.131 "nbd_device": "/dev/nbd0", 00:06:31.131 "bdev_name": "Malloc0" 00:06:31.131 }, 00:06:31.132 { 00:06:31.132 "nbd_device": "/dev/nbd1", 00:06:31.132 "bdev_name": "Malloc1" 00:06:31.132 } 00:06:31.132 ]' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.132 { 00:06:31.132 "nbd_device": "/dev/nbd0", 00:06:31.132 "bdev_name": "Malloc0" 00:06:31.132 }, 00:06:31.132 { 00:06:31.132 "nbd_device": "/dev/nbd1", 00:06:31.132 "bdev_name": "Malloc1" 00:06:31.132 } 00:06:31.132 ]' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.132 /dev/nbd1' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.132 /dev/nbd1' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.132 10:13:55 event.app_repeat -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.132 256+0 records in 00:06:31.132 256+0 records out 00:06:31.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00528482 s, 198 MB/s 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.132 256+0 records in 00:06:31.132 256+0 records out 00:06:31.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163354 s, 64.2 MB/s 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.132 256+0 records in 00:06:31.132 256+0 records out 00:06:31.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181234 s, 57.9 MB/s 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.132 10:13:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.390 10:13:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.648 10:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.907 10:13:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.907 10:13:56 event.app_repeat -- event/event.sh@34 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.907 10:13:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.165 [2024-07-15 10:13:56.853599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.165 [2024-07-15 10:13:56.917425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.165 [2024-07-15 10:13:56.917429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.422 [2024-07-15 10:13:56.958655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.422 [2024-07-15 10:13:56.958692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.953 10:13:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.953 10:13:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:34.953 spdk_app_start Round 1 00:06:34.953 10:13:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1707718 /var/tmp/spdk-nbd.sock 00:06:34.953 10:13:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1707718 ']' 00:06:34.953 10:13:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.953 10:13:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.953 10:13:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.953 10:13:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.953 10:13:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.212 10:13:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.212 10:13:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.212 10:13:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.212 Malloc0 00:06:35.470 10:14:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.470 Malloc1 00:06:35.470 10:14:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.470 
10:14:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.470 10:14:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.728 /dev/nbd0 00:06:35.728 10:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.728 10:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.728 1+0 records in 00:06:35.728 1+0 records out 00:06:35.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000130277 s, 31.4 MB/s 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.728 10:14:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.728 10:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.728 10:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.728 10:14:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:35.986 /dev/nbd1 00:06:35.986 10:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.986 10:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.986 
10:14:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.986 1+0 records in 00:06:35.986 1+0 records out 00:06:35.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146073 s, 28.0 MB/s 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.986 10:14:00 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.986 10:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.986 10:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.986 10:14:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.986 10:14:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.986 10:14:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.242 { 00:06:36.242 "nbd_device": "/dev/nbd0", 00:06:36.242 "bdev_name": "Malloc0" 00:06:36.242 }, 00:06:36.242 { 00:06:36.242 "nbd_device": "/dev/nbd1", 00:06:36.242 "bdev_name": "Malloc1" 00:06:36.242 } 00:06:36.242 ]' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.242 { 00:06:36.242 "nbd_device": "/dev/nbd0", 00:06:36.242 "bdev_name": "Malloc0" 00:06:36.242 }, 00:06:36.242 { 00:06:36.242 "nbd_device": "/dev/nbd1", 00:06:36.242 "bdev_name": "Malloc1" 00:06:36.242 } 00:06:36.242 ]' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.242 /dev/nbd1' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.242 /dev/nbd1' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.242 10:14:00 event.app_repeat -- 
bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.242 256+0 records in 00:06:36.242 256+0 records out 00:06:36.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113733 s, 92.2 MB/s 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.242 256+0 records in 00:06:36.242 256+0 records out 00:06:36.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178799 s, 58.6 MB/s 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.242 256+0 records in 00:06:36.242 256+0 records out 00:06:36.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178825 s, 58.6 MB/s 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.242 10:14:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.499 10:14:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:36.758 10:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.032 10:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.032 10:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.032 10:14:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.032 10:14:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.032 10:14:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.032 10:14:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.032 10:14:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.032 10:14:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.307 [2024-07-15 10:14:01.945142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.307 [2024-07-15 10:14:02.008251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.307 [2024-07-15 10:14:02.008255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.307 [2024-07-15 10:14:02.050400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:06:37.307 [2024-07-15 10:14:02.050444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.584 10:14:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.584 10:14:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:40.584 spdk_app_start Round 2 00:06:40.584 10:14:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1707718 /var/tmp/spdk-nbd.sock 00:06:40.584 10:14:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1707718 ']' 00:06:40.584 10:14:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.584 10:14:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.584 10:14:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.584 10:14:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.584 10:14:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.584 10:14:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.584 10:14:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:40.584 10:14:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.584 Malloc0 00:06:40.584 10:14:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.584 Malloc1 00:06:40.584 10:14:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.584 10:14:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.842 
/dev/nbd0 00:06:40.842 10:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.842 10:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.842 1+0 records in 00:06:40.842 1+0 records out 00:06:40.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219068 s, 18.7 MB/s 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.842 10:14:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:40.842 10:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.842 10:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.842 10:14:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.100 /dev/nbd1 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.100 1+0 records in 00:06:41.100 1+0 records out 00:06:41.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260088 s, 15.7 MB/s 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.100 10:14:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.100 { 00:06:41.100 "nbd_device": "/dev/nbd0", 00:06:41.100 "bdev_name": "Malloc0" 00:06:41.100 }, 00:06:41.100 { 00:06:41.100 "nbd_device": "/dev/nbd1", 00:06:41.100 "bdev_name": "Malloc1" 00:06:41.100 } 00:06:41.100 ]' 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.100 { 00:06:41.100 "nbd_device": "/dev/nbd0", 00:06:41.100 "bdev_name": "Malloc0" 00:06:41.100 }, 00:06:41.100 { 00:06:41.100 "nbd_device": "/dev/nbd1", 00:06:41.100 "bdev_name": "Malloc1" 00:06:41.100 } 00:06:41.100 ]' 00:06:41.100 10:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.357 10:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.357 /dev/nbd1' 00:06:41.357 10:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.357 /dev/nbd1' 00:06:41.357 10:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.357 10:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.357 10:14:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.357 10:14:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.357 10:14:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.358 256+0 records in 00:06:41.358 256+0 records out 00:06:41.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011435 s, 91.7 MB/s 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.358 256+0 records in 00:06:41.358 256+0 records out 00:06:41.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159698 s, 65.7 MB/s 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.358 256+0 records in 00:06:41.358 256+0 records out 00:06:41.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198831 s, 52.7 MB/s 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.358 10:14:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.358 10:14:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.617 
10:14:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.617 10:14:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.875 10:14:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.875 10:14:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.133 10:14:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.391 [2024-07-15 10:14:07.022042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.391 [2024-07-15 10:14:07.085064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.391 [2024-07-15 10:14:07.085067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.391 [2024-07-15 10:14:07.126215] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.391 [2024-07-15 10:14:07.126257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
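The nbd_dd_data_verify and nbd_stop_disks calls traced above reduce to a small write-then-compare loop followed by an RPC-driven teardown. A minimal standalone sketch of that pattern, assuming the 1 MiB random test file was already generated earlier in the run and reusing the rpc.py path and socket shown in the trace (the loop below is illustrative, not the real nbd_common.sh code):

rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk-nbd.sock
tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# Write: copy 256 x 4 KiB (1 MiB) of the random test file onto every NBD device.
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# Verify: the first 1 MiB read back from each device must match the source file.
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"

# Teardown: detach each device over the RPC socket, then wait for it to leave /proc/partitions.
for nbd in "${nbd_list[@]}"; do
    "$rpc_py" -s "$rpc_sock" nbd_stop_disk "$nbd"
    name=$(basename "$nbd")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done

# No NBD devices should be left registered (this is what nbd_get_count checks).
count=$("$rpc_py" -s "$rpc_sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
(( count == 0 ))

Once both devices are stopped, nbd_get_disks returns an empty list, so the grep count in the trace comes out as 0 and the test proceeds to shut the app down.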
00:06:45.668 10:14:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1707718 /var/tmp/spdk-nbd.sock 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1707718 ']' 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:45.668 10:14:09 event.app_repeat -- event/event.sh@39 -- # killprocess 1707718 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1707718 ']' 00:06:45.668 10:14:09 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1707718 00:06:45.668 10:14:10 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:45.668 10:14:10 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.668 10:14:10 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1707718 00:06:45.668 10:14:10 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.668 10:14:10 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.668 10:14:10 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1707718' 00:06:45.668 killing process with pid 1707718 00:06:45.668 10:14:10 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1707718 00:06:45.668 10:14:10 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1707718 00:06:45.668 spdk_app_start is called in Round 0. 00:06:45.668 Shutdown signal received, stop current app iteration 00:06:45.668 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:45.668 spdk_app_start is called in Round 1. 00:06:45.668 Shutdown signal received, stop current app iteration 00:06:45.668 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:45.668 spdk_app_start is called in Round 2. 00:06:45.668 Shutdown signal received, stop current app iteration 00:06:45.668 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:45.668 spdk_app_start is called in Round 3. 
00:06:45.668 Shutdown signal received, stop current app iteration 00:06:45.668 10:14:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:45.668 10:14:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:45.668 00:06:45.668 real 0m16.317s 00:06:45.668 user 0m34.603s 00:06:45.668 sys 0m3.090s 00:06:45.669 10:14:10 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.669 10:14:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.669 ************************************ 00:06:45.669 END TEST app_repeat 00:06:45.669 ************************************ 00:06:45.669 10:14:10 event -- common/autotest_common.sh@1142 -- # return 0 00:06:45.669 10:14:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:45.669 00:06:45.669 real 0m24.960s 00:06:45.669 user 0m49.249s 00:06:45.669 sys 0m4.245s 00:06:45.669 10:14:10 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.669 10:14:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.669 ************************************ 00:06:45.669 END TEST event 00:06:45.669 ************************************ 00:06:45.669 10:14:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:45.669 10:14:10 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:45.669 10:14:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.669 10:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.669 10:14:10 -- common/autotest_common.sh@10 -- # set +x 00:06:45.669 ************************************ 00:06:45.669 START TEST thread 00:06:45.669 ************************************ 00:06:45.669 10:14:10 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:06:45.669 * Looking for test storage... 00:06:45.669 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:06:45.669 10:14:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.669 10:14:10 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:45.669 10:14:10 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.669 10:14:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.927 ************************************ 00:06:45.927 START TEST thread_poller_perf 00:06:45.927 ************************************ 00:06:45.927 10:14:10 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.927 [2024-07-15 10:14:10.513937] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
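The app_repeat teardown above goes through the killprocess helper from autotest_common.sh. Based on the traced steps (pid check, ps comm= lookup, the reactor_0-vs-sudo comparison, kill, wait), it behaves roughly like the sketch below; the exact error handling in the real helper may differ:

killprocess() {
    local pid=$1

    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 0              # already gone, nothing to do

    if [[ $(uname) == Linux ]]; then
        # Resolve the process name first so a bare "sudo" wrapper is never signalled directly.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                     # reap it if it is a child of this shell
}

In this run ps reports the name as reactor_0 (the SPDK reactor thread), so the helper signals the app_repeat process directly and waits for it to exit.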
00:06:45.927 [2024-07-15 10:14:10.514001] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1710725 ] 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:45.927 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:45.927 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:45.927 [2024-07-15 10:14:10.604496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.927 [2024-07-15 10:14:10.673706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.927 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:47.299 ====================================== 00:06:47.299 busy:2507956292 (cyc) 00:06:47.299 total_run_count: 435000 00:06:47.299 tsc_hz: 2500000000 (cyc) 00:06:47.299 ====================================== 00:06:47.299 poller_cost: 5765 (cyc), 2306 (nsec) 00:06:47.299 00:06:47.299 real 0m1.252s 00:06:47.299 user 0m1.138s 00:06:47.299 sys 0m0.109s 00:06:47.299 10:14:11 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.299 10:14:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.299 ************************************ 00:06:47.299 END TEST thread_poller_perf 00:06:47.299 ************************************ 00:06:47.299 10:14:11 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:47.299 10:14:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.299 10:14:11 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:47.299 10:14:11 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.299 10:14:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.299 ************************************ 00:06:47.299 START TEST thread_poller_perf 00:06:47.299 ************************************ 00:06:47.299 10:14:11 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.299 [2024-07-15 10:14:11.855284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
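The poller_cost figure above is just the busy cycle count divided by the number of poller invocations, converted to nanoseconds with the reported TSC frequency. Reproducing the arithmetic from this run (2507956292 cycles over 435000 runs at 2.5 GHz):

busy_cyc=2507956292 runs=435000 tsc_hz=2500000000
awk -v b="$busy_cyc" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
    cyc  = b / r              # ~5765 cycles spent per poller invocation
    nsec = cyc / hz * 1e9     # ~2306 ns at the 2.5 GHz TSC reported above
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'

The second poller_perf run below repeats the same math for the 0 microsecond period case (-l 0), where the per-invocation cost falls to 438 cycles (175 ns).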
00:06:47.299 [2024-07-15 10:14:11.855349] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711013 ] 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:47.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:47.299 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:47.299 [2024-07-15 10:14:11.946749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.299 [2024-07-15 10:14:12.015104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.299 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:48.671 ====================================== 00:06:48.671 busy:2501857516 (cyc) 00:06:48.671 total_run_count: 5710000 00:06:48.671 tsc_hz: 2500000000 (cyc) 00:06:48.671 ====================================== 00:06:48.671 poller_cost: 438 (cyc), 175 (nsec) 00:06:48.671 00:06:48.671 real 0m1.254s 00:06:48.671 user 0m1.144s 00:06:48.671 sys 0m0.105s 00:06:48.671 10:14:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.671 10:14:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.671 ************************************ 00:06:48.671 END TEST thread_poller_perf 00:06:48.671 ************************************ 00:06:48.671 10:14:13 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:48.671 10:14:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.671 00:06:48.671 real 0m2.781s 00:06:48.671 user 0m2.372s 00:06:48.671 sys 0m0.419s 00:06:48.671 10:14:13 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.671 10:14:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.671 ************************************ 00:06:48.671 END TEST thread 00:06:48.671 ************************************ 00:06:48.671 10:14:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.671 10:14:13 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:48.671 10:14:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.671 10:14:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.671 10:14:13 -- common/autotest_common.sh@10 -- # set +x 00:06:48.671 ************************************ 00:06:48.671 START TEST accel 00:06:48.671 ************************************ 00:06:48.671 10:14:13 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:06:48.671 * Looking for test storage... 
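The START TEST / END TEST banners and the real/user/sys lines around each case come from the run_test wrapper in autotest_common.sh. Only its outline is visible in the trace (an argument-count guard, xtrace toggling, a timed invocation), so the following is a simplified reconstruction rather than the literal function:

run_test() {
    local test_name=$1
    shift
    (( $# >= 1 )) || return 1     # the '[' N -le 1 ']' guard seen in the trace

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    time "$@"
    local rc=$?

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

Calls nest, which is why the banners appear in pairs: run_test thread thread.sh itself invokes run_test thread_poller_perf twice before the outer END TEST thread banner is printed.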
00:06:48.671 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:06:48.671 10:14:13 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:48.671 10:14:13 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:48.671 10:14:13 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.671 10:14:13 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:48.671 10:14:13 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1711334 00:06:48.671 10:14:13 accel -- accel/accel.sh@63 -- # waitforlisten 1711334 00:06:48.671 10:14:13 accel -- common/autotest_common.sh@829 -- # '[' -z 1711334 ']' 00:06:48.671 10:14:13 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:48.671 10:14:13 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.671 10:14:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.671 10:14:13 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.671 10:14:13 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.671 10:14:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.671 10:14:13 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.671 10:14:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.671 10:14:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.671 10:14:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.671 10:14:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.671 10:14:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:48.671 10:14:13 accel -- accel/accel.sh@41 -- # jq -r . 00:06:48.671 [2024-07-15 10:14:13.346331] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
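waitforlisten only shows its prologue in this trace (the pid check, the rpc_addr and max_retries locals, and the "Waiting for process..." message); the polling loop itself runs with xtrace disabled. A guess at its shape follows, with the rpc_get_methods probe being an assumption rather than something visible in this log:

rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100

    [[ -n $pid ]] || return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."

    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1        # target died while starting up
        if [[ -S $rpc_addr ]] && "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                   # RPC server is up and answering
        fi
        sleep 0.5
    done
    return 1
}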
00:06:48.671 [2024-07-15 10:14:13.346380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711334 ] 00:06:48.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.671 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:48.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.671 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:48.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.671 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:48.671 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.671 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:48.672 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:48.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:48.672 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:48.672 [2024-07-15 10:14:13.436640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.929 [2024-07-15 10:14:13.508589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@862 -- # return 0 00:06:49.493 10:14:14 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:49.493 10:14:14 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:49.493 10:14:14 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:49.493 10:14:14 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:49.493 10:14:14 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:49.493 10:14:14 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:49.493 10:14:14 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 
10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.493 10:14:14 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.493 10:14:14 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.493 10:14:14 accel -- accel/accel.sh@75 -- # killprocess 1711334 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@948 -- # '[' -z 1711334 ']' 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@952 -- # kill -0 1711334 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@953 -- # uname 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1711334 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1711334' 00:06:49.493 killing process with pid 1711334 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@967 -- # kill 1711334 00:06:49.493 10:14:14 accel -- common/autotest_common.sh@972 -- # wait 1711334 00:06:50.059 10:14:14 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:50.059 10:14:14 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:50.059 10:14:14 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.059 10:14:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.059 10:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.059 10:14:14 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:50.059 10:14:14 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
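The long run of IFS== / read -r opc module lines above is the loop that records which module handles each accel opcode. Stripped of the xtrace noise it is essentially the following (rpc.py path as used elsewhere in this job; with no hardware accel configured every opcode ends up on the software module, which is exactly what the trace shows):

rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
declare -A expected_opcs

# Ask the target for its opcode->module table and flatten it to "opcode=module" strings.
exp_opcs=($("$rpc_py" accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))

for opc_opt in "${exp_opcs[@]}"; do
    IFS== read -r opc module <<< "$opc_opt"   # split e.g. "copy=software"
    expected_opcs["$opc"]=$module
done

Later accel cases can then compare the module accel_perf actually reports for a workload against this expected_opcs table.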
00:06:50.059 10:14:14 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.059 10:14:14 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:50.059 10:14:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.059 10:14:14 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:50.059 10:14:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:50.059 10:14:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.059 10:14:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.059 ************************************ 00:06:50.059 START TEST accel_missing_filename 00:06:50.059 ************************************ 00:06:50.059 10:14:14 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:50.059 10:14:14 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:50.059 10:14:14 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:50.059 10:14:14 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.059 10:14:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.059 10:14:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.059 10:14:14 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.059 10:14:14 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:50.059 10:14:14 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:50.059 [2024-07-15 10:14:14.711953] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
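The negative accel_perf cases are driven through the NOT wrapper, which first confirms that what it was handed is actually executable via valid_exec_arg (the 'type -t accel_perf' line above). That check is small enough to sketch; the set of accepted type -t results here is an assumption, not something the log spells out:

valid_exec_arg() {
    local arg=$1
    # Only run things bash can execute directly: a builtin, a shell function, or a file on PATH.
    case "$(type -t "$arg")" in
        builtin | function | file) return 0 ;;
        *) return 1 ;;
    esac
}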
00:06:50.059 [2024-07-15 10:14:14.711995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711638 ] 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:50.059 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.059 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:50.059 [2024-07-15 10:14:14.800092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.321 [2024-07-15 10:14:14.870186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.321 [2024-07-15 10:14:14.923394] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.321 [2024-07-15 10:14:14.983787] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:50.321 A filename is required. 00:06:50.321 10:14:15 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:50.321 10:14:15 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.321 10:14:15 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:50.321 10:14:15 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.321 10:14:15 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:50.321 10:14:15 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.321 00:06:50.321 real 0m0.361s 00:06:50.321 user 0m0.251s 00:06:50.321 sys 0m0.140s 00:06:50.321 10:14:15 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.321 10:14:15 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:50.321 ************************************ 00:06:50.321 END TEST accel_missing_filename 00:06:50.321 ************************************ 00:06:50.321 10:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.321 10:14:15 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:50.321 10:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:50.321 10:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.321 10:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.579 ************************************ 00:06:50.579 START TEST accel_compress_verify 00:06:50.579 ************************************ 00:06:50.579 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:50.579 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:50.579 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:50.579 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.579 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.579 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.579 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.579 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:50.579 10:14:15 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:50.579 [2024-07-15 10:14:15.154498] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:50.579 [2024-07-15 10:14:15.154538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711663 ] 00:06:50.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.579 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:50.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.579 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:50.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.579 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:50.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.579 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:50.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.579 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:50.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.579 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:50.579 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:50.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:50.580 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:50.580 [2024-07-15 10:14:15.244189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.580 [2024-07-15 10:14:15.313197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.580 [2024-07-15 10:14:15.367167] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.838 [2024-07-15 10:14:15.427958] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:50.838 00:06:50.838 Compression does not support the verify option, aborting. 00:06:50.838 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:50.838 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.838 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:50.838 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.838 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:50.838 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.838 00:06:50.838 real 0m0.367s 00:06:50.838 user 0m0.229s 00:06:50.838 sys 0m0.150s 00:06:50.838 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.838 10:14:15 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:50.838 ************************************ 00:06:50.838 END TEST accel_compress_verify 00:06:50.838 ************************************ 00:06:50.838 10:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.838 10:14:15 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:50.838 10:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:50.838 10:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.838 10:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.838 ************************************ 00:06:50.838 START TEST accel_wrong_workload 00:06:50.838 ************************************ 00:06:50.838 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:50.838 10:14:15 accel.accel_wrong_workload -- 
common/autotest_common.sh@648 -- # local es=0 00:06:50.838 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:50.838 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.838 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.838 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.838 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.838 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:50.838 10:14:15 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:50.838 Unsupported workload type: foobar 00:06:50.838 [2024-07-15 10:14:15.610198] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:50.838 accel_perf options: 00:06:50.838 [-h help message] 00:06:50.838 [-q queue depth per core] 00:06:50.838 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.838 [-T number of threads per core 00:06:50.838 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.838 [-t time in seconds] 00:06:50.838 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.838 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:50.838 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.838 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.838 [-S for crc32c workload, use this seed value (default 0) 00:06:50.838 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.838 [-f for fill workload, use this BYTE value (default 255) 00:06:50.839 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.839 [-y verify result if this switch is on] 00:06:50.839 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.839 Can be used to spread operations across a wider range of memory. 
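A missing -l file, an unsupported workload and a negative -x all make accel_perf exit non-zero, and the surrounding NOT wrapper turns that expected failure into a test pass. The es= lines in the trace (234 -> 106 -> 1 for the missing-filename case, followed by '(( !es == 0 ))') suggest roughly the normalization below; treat it as a reconstruction, not the literal autotest_common.sh code:

NOT() {
    local es=0
    "$@" || es=$?

    if ((es > 128)); then
        es=$((es & ~128))   # 234 becomes 106 in the trace: drop the "killed by signal" bit
    fi
    case "$es" in
        0) ;;               # wrapped command unexpectedly succeeded
        *) es=1 ;;          # any genuine failure collapses to 1
    esac
    ((!es == 0))            # exit 0 (test passes) only when the wrapped command failed
}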
00:06:50.839 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:50.839 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.839 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.839 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.839 00:06:50.839 real 0m0.038s 00:06:50.839 user 0m0.027s 00:06:50.839 sys 0m0.011s 00:06:50.839 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.839 10:14:15 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:50.839 ************************************ 00:06:50.839 END TEST accel_wrong_workload 00:06:50.839 ************************************ 00:06:50.839 Error: writing output failed: Broken pipe 00:06:51.097 10:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.097 10:14:15 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:51.097 10:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:51.097 10:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.097 10:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.097 ************************************ 00:06:51.097 START TEST accel_negative_buffers 00:06:51.097 ************************************ 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:51.097 10:14:15 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:51.097 -x option must be non-negative. 
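The es bookkeeping visible in these traces (es=161 -> es=33 -> es=1 for the compress_verify case earlier, es=1 directly here) is the harness normalizing the command's exit status before the NOT assertion: values above 128 have 128 subtracted, and any remaining non-zero value collapses to 1. A simplified sketch of that normalization (the function name is invented for illustration; the real logic lives in common/autotest_common.sh) is:

    # Hedged reading of the exit-status handling shown in the xtrace above.
    normalize_status() {
        local es=$1
        (( es > 128 )) && es=$(( es - 128 ))   # 161 becomes 33, matching the trace
        (( es != 0 )) && es=1                  # every remaining failure collapses to 1
        return $es
    }

    normalize_status 161; echo "normalized: $?"   # prints "normalized: 1"
    normalize_status 0;   echo "normalized: $?"   # prints "normalized: 0"

The NOT wrapper then simply inverts this result, so a failing accel_perf invocation counts as a passing negative test.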
00:06:51.097 [2024-07-15 10:14:15.712855] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:51.097 accel_perf options: 00:06:51.097 [-h help message] 00:06:51.097 [-q queue depth per core] 00:06:51.097 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:51.097 [-T number of threads per core 00:06:51.097 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:51.097 [-t time in seconds] 00:06:51.097 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:51.097 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:51.097 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:51.097 [-l for compress/decompress workloads, name of uncompressed input file 00:06:51.097 [-S for crc32c workload, use this seed value (default 0) 00:06:51.097 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:51.097 [-f for fill workload, use this BYTE value (default 255) 00:06:51.097 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:51.097 [-y verify result if this switch is on] 00:06:51.097 [-a tasks to allocate per core (default: same value as -q)] 00:06:51.097 Can be used to spread operations across a wider range of memory. 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.097 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.098 00:06:51.098 real 0m0.042s 00:06:51.098 user 0m0.025s 00:06:51.098 sys 0m0.016s 00:06:51.098 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.098 10:14:15 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:51.098 ************************************ 00:06:51.098 END TEST accel_negative_buffers 00:06:51.098 ************************************ 00:06:51.098 Error: writing output failed: Broken pipe 00:06:51.098 10:14:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.098 10:14:15 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:51.098 10:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:51.098 10:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.098 10:14:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.098 ************************************ 00:06:51.098 START TEST accel_crc32c 00:06:51.098 ************************************ 00:06:51.098 10:14:15 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 
00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:51.098 10:14:15 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:51.098 [2024-07-15 10:14:15.823934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:51.098 [2024-07-15 10:14:15.823989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711921 ] 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:01.0 cannot be 
used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:51.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:51.098 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:51.357 [2024-07-15 10:14:15.915160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.357 [2024-07-15 10:14:15.983237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- 
# read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.357 10:14:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:52.730 10:14:17 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.730 00:06:52.730 real 0m1.386s 00:06:52.730 user 0m1.237s 00:06:52.730 sys 0m0.156s 00:06:52.730 10:14:17 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.730 10:14:17 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:52.730 ************************************ 00:06:52.730 END TEST accel_crc32c 00:06:52.730 ************************************ 00:06:52.730 10:14:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.730 10:14:17 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:52.730 10:14:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:52.730 10:14:17 accel 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.730 10:14:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.730 ************************************ 00:06:52.730 START TEST accel_crc32c_C2 00:06:52.730 ************************************ 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:52.730 [2024-07-15 10:14:17.280114] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:52.730 [2024-07-15 10:14:17.280158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712181 ] 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:01.6 cannot be used 
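The same pair of qat_pci_device_allocate()/EAL warnings repeats for every QAT virtual function probed during each accel_perf start-up. When scanning a console log like this one, a quick way to summarize them (build.log is only a placeholder name for wherever this output is saved) is:

    # Count how often each PCI address hit the "maximum number of QAT devices"
    # limit; build.log stands in for a local copy of this console output.
    grep -o 'Requested device [0-9a-f:.]* cannot be used' build.log \
        | sort | uniq -c | sort -rn | head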
00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:52.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.730 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:52.730 [2024-07-15 10:14:17.363223] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.730 [2024-07-15 10:14:17.432593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val=0 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.730 10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.730 
10:14:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.173 00:06:54.173 real 0m1.360s 00:06:54.173 user 0m1.238s 00:06:54.173 sys 0m0.132s 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.173 10:14:18 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:54.173 ************************************ 00:06:54.173 END TEST accel_crc32c_C2 00:06:54.173 ************************************ 00:06:54.173 10:14:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.173 10:14:18 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:54.173 10:14:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:54.173 10:14:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.173 10:14:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.173 ************************************ 00:06:54.173 START TEST accel_copy 00:06:54.173 ************************************ 00:06:54.173 10:14:18 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:54.173 10:14:18 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:54.173 10:14:18 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:54.173 [2024-07-15 10:14:18.732227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:54.173 [2024-07-15 10:14:18.732284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712418 ] 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:02.6 cannot be 
used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.173 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:54.173 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.174 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:54.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.174 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:54.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.174 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:54.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:54.174 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:54.174 [2024-07-15 10:14:18.821113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.174 [2024-07-15 10:14:18.890634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.174 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.429 10:14:18 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.429 10:14:18 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:55.358 10:14:20 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.358 00:06:55.358 real 0m1.392s 00:06:55.358 user 0m1.241s 00:06:55.358 sys 0m0.152s 00:06:55.358 10:14:20 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.358 10:14:20 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.358 ************************************ 00:06:55.358 END TEST accel_copy 00:06:55.358 ************************************ 00:06:55.358 10:14:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.358 10:14:20 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.358 10:14:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:55.358 10:14:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.358 10:14:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.616 ************************************ 00:06:55.616 START TEST 
accel_fill 00:06:55.616 ************************************ 00:06:55.616 10:14:20 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:55.616 10:14:20 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:55.616 [2024-07-15 10:14:20.188437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:55.616 [2024-07-15 10:14:20.188493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712653 ] 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:55.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.616 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:55.616 [2024-07-15 10:14:20.279597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.616 [2024-07-15 10:14:20.352112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.873 10:14:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.818 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:56.819 10:14:21 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.819 00:06:56.819 real 0m1.394s 00:06:56.819 user 0m1.242s 00:06:56.819 sys 0m0.154s 00:06:56.819 10:14:21 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.819 
10:14:21 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:56.819 ************************************ 00:06:56.819 END TEST accel_fill 00:06:56.819 ************************************ 00:06:56.819 10:14:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.819 10:14:21 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:56.819 10:14:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:56.819 10:14:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.819 10:14:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.075 ************************************ 00:06:57.075 START TEST accel_copy_crc32c 00:06:57.075 ************************************ 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:57.075 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:57.075 [2024-07-15 10:14:21.649466] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
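accel_fill completes in roughly 1.4 s of wall time on the software path (real 0m1.394s above) and the harness moves straight on to copy_crc32c. To reproduce the fill case outside the run_test wrapper, the binary and flags from the trace can be invoked directly; a minimal sketch, assuming the SPDK tree still sits at the workspace path shown in the log and leaving out -c /dev/fd/62 (the JSON accel config that build_accel_config feeds in through process substitution) so the software defaults apply:

    # Standalone re-run of the fill workload. Flag values are copied verbatim from
    # the trace; reading -f/-a as fill pattern and buffer alignment is an
    # assumption - confirm against the tool's -h output before relying on it.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y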
00:06:57.075 [2024-07-15 10:14:21.649509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712893 ] 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.075 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:57.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:57.076 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.076 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:57.076 [2024-07-15 10:14:21.732950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.076 [2024-07-15 10:14:21.802514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.076 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.332 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.333 
10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.333 10:14:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:58.265 10:14:22 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.265 00:06:58.265 real 0m1.364s 00:06:58.265 user 0m1.233s 00:06:58.266 sys 0m0.139s 00:06:58.266 10:14:22 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.266 10:14:22 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:58.266 ************************************ 00:06:58.266 END TEST accel_copy_crc32c 00:06:58.266 ************************************ 00:06:58.266 10:14:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.266 10:14:23 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.266 10:14:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 
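Each of these cases is driven through the same run_test wrapper from common/autotest_common.sh (the @1099/@1105/@1123 references in the trace), which prints the START/END banners and the real/user/sys line around the accel_test call. A rough stand-in for that pattern, useful for reading the log rather than a copy of SPDK's helper, which also manages xtrace and exit-code bookkeeping:

    # Simplified sketch of a run_test-style wrapper: banner, timed command, banner.
    run_named_test() {
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    # Example: prints the banners and a real/user/sys line much like the ones above.
    run_named_test demo_sleep sleep 1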
00:06:58.266 10:14:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.266 10:14:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.523 ************************************ 00:06:58.523 START TEST accel_copy_crc32c_C2 00:06:58.523 ************************************ 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:58.523 [2024-07-15 10:14:23.090968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
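The [ DPDK EAL parameters: ... ] line that follows records the EAL arguments SPDK builds for this accel_perf instance; note the per-run --file-prefix=spdk_pid<pid>, which keeps each process's hugepage and runtime files separate. To pull those parameter lines out of a saved copy of this console output and compare them across the runs in this section (the console.log filename is an assumption):

    # Extract every EAL parameter line from a saved console log and count duplicates.
    grep -o '\[ DPDK EAL parameters: [^]]*]' console.log | sort | uniq -c | sort -rn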
00:06:58.523 [2024-07-15 10:14:23.091010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713146 ] 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:58.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:58.523 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:58.523 [2024-07-15 10:14:23.176882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.523 [2024-07-15 10:14:23.246840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.523 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.524 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.524 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.524 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.524 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.524 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.780 10:14:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.712 00:06:59.712 real 0m1.373s 00:06:59.712 user 0m1.234s 00:06:59.712 sys 0m0.138s 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.712 10:14:24 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:59.712 ************************************ 00:06:59.712 END TEST accel_copy_crc32c_C2 00:06:59.712 
************************************ 00:06:59.712 10:14:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.712 10:14:24 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:59.712 10:14:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.712 10:14:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.712 10:14:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.969 ************************************ 00:06:59.969 START TEST accel_dualcast 00:06:59.969 ************************************ 00:06:59.969 10:14:24 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:59.969 10:14:24 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:59.969 [2024-07-15 10:14:24.547631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
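Before the dualcast pass gets going, note how the two copy_crc32c runs differ: the plain case fed two 4096-byte values into the config, while the -C 2 case above fed a 4096-byte and an 8192-byte value, consistent with the chained variant working over two 4096-byte sources. Treat that reading as an assumption and check accel_perf -h for the authoritative meaning of -C. A small sketch to run the two variants back to back:

    # Plain vs. chained copy_crc32c; flags are taken from the trace, and $extra is
    # intentionally left unquoted so "-C 2" splits into two arguments.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    for extra in "" "-C 2"; do
        echo "=== accel_perf -w copy_crc32c $extra ==="
        "$SPDK/build/examples/accel_perf" -t 1 -w copy_crc32c -y $extra
    done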
00:06:59.969 [2024-07-15 10:14:24.547680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713422 ] 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.969 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:59.969 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:01.6 cannot be used 
00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:59.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:59.970 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:59.970 [2024-07-15 10:14:24.636613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.970 [2024-07-15 10:14:24.706008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:00.227 10:14:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:01.158 10:14:25 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.158 00:07:01.158 real 0m1.385s 00:07:01.158 user 0m1.227s 00:07:01.158 sys 0m0.155s 00:07:01.158 10:14:25 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.158 10:14:25 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:01.158 ************************************ 00:07:01.158 END TEST accel_dualcast 00:07:01.158 ************************************ 00:07:01.158 10:14:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.416 10:14:25 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:01.416 10:14:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.416 10:14:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.416 10:14:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.416 ************************************ 00:07:01.416 START TEST accel_compare 00:07:01.416 ************************************ 00:07:01.416 10:14:25 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare 
-y 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:01.416 10:14:25 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:01.416 [2024-07-15 10:14:26.007054] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:01.416 [2024-07-15 10:14:26.007102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713710 ] 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:07:01.416 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.416 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:01.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:01.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.417 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:01.417 [2024-07-15 10:14:26.090378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.417 [2024-07-15 10:14:26.159535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- 
accel/accel.sh@20 -- # val= 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 
00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.674 10:14:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.675 10:14:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.675 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.675 10:14:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:02.608 10:14:27 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.608 00:07:02.608 real 0m1.368s 00:07:02.608 user 0m1.230s 00:07:02.608 sys 0m0.138s 00:07:02.608 10:14:27 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.608 10:14:27 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:02.608 ************************************ 00:07:02.608 END TEST accel_compare 00:07:02.608 ************************************ 00:07:02.866 10:14:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.866 10:14:27 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:02.866 10:14:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.866 10:14:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.866 10:14:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 ************************************ 00:07:02.866 START TEST accel_xor 00:07:02.866 ************************************ 00:07:02.866 10:14:27 accel.accel_xor -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:02.866 10:14:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:02.866 [2024-07-15 10:14:27.467841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:02.866 [2024-07-15 10:14:27.467891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713987 ] 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:02.866 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:02.866 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:02.866 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:02.866 [2024-07-15 10:14:27.556683] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.866 [2024-07-15 10:14:27.624880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:03.124 10:14:27 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.124 10:14:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:04.057 10:14:28 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.057 00:07:04.057 real 0m1.382s 00:07:04.057 user 0m1.238s 00:07:04.057 sys 0m0.149s 00:07:04.057 10:14:28 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.057 10:14:28 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:04.057 ************************************ 00:07:04.057 END TEST accel_xor 00:07:04.057 ************************************ 00:07:04.315 10:14:28 accel -- common/autotest_common.sh@1142 -- # return 0 
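
Each accel test in this log follows the same shape: run_test wraps accel_test, which launches /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w <workload> -y, builds the accel config (accel_json_cfg, jq -r .), and then the xtrace shows each reported field being split on ':' with read -r var val inside a case "$var" block so the script can capture the opcode and the module that serviced it. Since the repeated qat_pci_device_allocate() / "EAL: Requested device ... cannot be used" messages indicate no QAT devices could be claimed, the final checks consistently resolve to the software module ([[ software == \s\o\f\t\w\a\r\e ]]). The fragment below is only a rough sketch of that parsing pattern for readers following the trace; check_accel_run and the sample input are invented for illustration and are not the real accel.sh helpers.

#!/usr/bin/env bash
# Sketch of the IFS=:/read/case pattern visible in the accel.sh xtrace above.
# check_accel_run and the sample fields are illustrative stand-ins only.
check_accel_run() {
    local expected_opc=$1
    local accel_opc="" accel_module=""
    while IFS=: read -r var val; do
        case "$var" in
            opcode) accel_opc="${val# }" ;;     # strip the leading space, e.g. "xor"
            module) accel_module="${val# }" ;;  # e.g. "software"
            *) ;;                               # other fields are ignored here
        esac
    done
    # Same final assertions the trace shows: an opcode and a module were
    # captured, and the software implementation handled the operation.
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] &&
        [[ $accel_opc == "$expected_opc" ]] &&
        [[ $accel_module == software ]]
}

# Hypothetical sample of the kind of per-run summary being parsed:
printf 'opcode: xor\nmodule: software\n' | check_accel_run xor && echo 'xor ran in software'

As the trace shows, each test then reports the elapsed real/user/sys time, disables xtrace (xtrace_disable), prints its END TEST banner, and returns 0 before run_test launches the next workload.
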
00:07:04.315 10:14:28 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:04.315 10:14:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:04.315 10:14:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.315 10:14:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.315 ************************************ 00:07:04.315 START TEST accel_xor 00:07:04.315 ************************************ 00:07:04.315 10:14:28 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:04.315 10:14:28 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:04.315 [2024-07-15 10:14:28.932911] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:04.315 [2024-07-15 10:14:28.932978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714268 ] 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:04.315 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:04.315 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:04.315 [2024-07-15 10:14:29.025478] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.315 [2024-07-15 10:14:29.092334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.573 10:14:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.504 10:14:30 accel.accel_xor -- 
accel/accel.sh@20 -- # val= 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:05.504 10:14:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.504 00:07:05.504 real 0m1.386s 00:07:05.504 user 0m1.238s 00:07:05.504 sys 0m0.153s 00:07:05.504 10:14:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.504 10:14:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:05.504 ************************************ 00:07:05.504 END TEST accel_xor 00:07:05.504 ************************************ 00:07:05.762 10:14:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.762 10:14:30 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:05.762 10:14:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:05.763 10:14:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.763 10:14:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.763 ************************************ 00:07:05.763 START TEST accel_dif_verify 00:07:05.763 ************************************ 00:07:05.763 10:14:30 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:05.763 10:14:30 accel.accel_dif_verify -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:05.763 10:14:30 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:05.763 [2024-07-15 10:14:30.388448] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:05.763 [2024-07-15 10:14:30.388491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714556 ] 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:05.763 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.763 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:05.763 [2024-07-15 10:14:30.471261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.763 [2024-07-15 10:14:30.539887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.020 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.020 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.020 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.021 10:14:30 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:06.021 10:14:30 accel.accel_dif_verify -- 
accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.021 10:14:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:06.953 10:14:31 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.953 00:07:06.953 real 0m1.361s 00:07:06.953 user 0m1.241s 00:07:06.953 sys 0m0.129s 00:07:06.953 10:14:31 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.953 10:14:31 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:06.953 ************************************ 00:07:06.953 END TEST accel_dif_verify 00:07:06.953 ************************************ 00:07:07.221 10:14:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.221 10:14:31 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:07.221 10:14:31 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:07.221 10:14:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.221 10:14:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.221 ************************************ 00:07:07.221 START TEST accel_dif_generate 00:07:07.221 ************************************ 00:07:07.221 10:14:31 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:07.221 10:14:31 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:07.221 [2024-07-15 10:14:31.840316] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:07.221 [2024-07-15 10:14:31.840371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714835 ] 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:07.221 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.221 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:07.221 [2024-07-15 10:14:31.928319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.221 [2024-07-15 10:14:31.997693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 
10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.537 10:14:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:08.469 10:14:33 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.469 00:07:08.469 real 0m1.384s 00:07:08.469 user 0m1.228s 00:07:08.469 sys 0m0.158s 00:07:08.469 10:14:33 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.469 10:14:33 accel.accel_dif_generate -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.469 ************************************ 00:07:08.469 END TEST accel_dif_generate 00:07:08.469 ************************************ 00:07:08.469 10:14:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.469 10:14:33 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:08.469 10:14:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:08.469 10:14:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.469 10:14:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.726 ************************************ 00:07:08.726 START TEST accel_dif_generate_copy 00:07:08.726 ************************************ 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:08.726 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:08.726 [2024-07-15 10:14:33.302313] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
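The bulk of every TEST block above and below is accel.sh, running under `set -x`, consuming accel_perf's key/value output one line at a time: the `IFS=:` / `read -r var val` / `case "$var" in` trace lines correspond to a parsing loop that records the opcode into accel_opc (accel.sh@23) and the selected engine into accel_module (accel.sh@22), after which the test asserts that the software path was exercised (accel.sh@27). A condensed sketch of that consumer, reconstructed from the xtrace rather than quoted from accel.sh — the case patterns and the accel_perf_cmd array name are illustrative assumptions:

```bash
# Reconstructed from the xtrace above, not copied from accel.sh; the case
# patterns and the accel_perf_cmd variable are assumptions.
while IFS=: read -r var val; do
	case "$var" in
		*opcode*) accel_opc=${val// /} ;;     # strip padding; e.g. dif_generate
		*module*) accel_module=${val// /} ;;  # e.g. software
	esac
done < <("${accel_perf_cmd[@]}")

[[ -n $accel_module ]]
[[ -n $accel_opc ]]
[[ $accel_module == software ]]   # xtrace prints this as [[ software == \s\o\f\t\w\a\r\e ]]
```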
00:07:08.726 [2024-07-15 10:14:33.302358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715124 ] 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.726 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:08.726 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:08.727 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:08.727 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:08.727 [2024-07-15 10:14:33.390998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.727 [2024-07-15 10:14:33.459331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 
00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:08.984 10:14:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.915 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.916 00:07:09.916 real 0m1.384s 00:07:09.916 user 0m1.233s 00:07:09.916 sys 0m0.152s 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.916 10:14:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.916 ************************************ 00:07:09.916 END TEST accel_dif_generate_copy 00:07:09.916 
************************************ 00:07:09.916 10:14:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.916 10:14:34 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:09.916 10:14:34 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:09.916 10:14:34 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:09.916 10:14:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.916 10:14:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.173 ************************************ 00:07:10.173 START TEST accel_comp 00:07:10.173 ************************************ 00:07:10.173 10:14:34 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:10.173 10:14:34 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:10.173 10:14:34 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:10.173 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.173 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.173 10:14:34 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:10.174 10:14:34 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:10.174 [2024-07-15 10:14:34.771530] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
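The starred banners and the `real`/`user`/`sys` triple that close each block come from the run_test wrapper in common/autotest_common.sh (the file the `common/autotest_common.sh@...` xtrace prefixes point at), which times the wrapped `accel_test -t 1 -w ...` invocation. Roughly, and inferred from this log's output rather than copied from the script:

```bash
# Rough shape only, inferred from the banners and timing lines in this log;
# the real run_test in autotest_common.sh carries extra bookkeeping.
run_test() {
	local name=$1
	shift
	echo "************************************"
	echo "START TEST $name"
	echo "************************************"
	time "$@"   # bash's time keyword produces the real/user/sys lines
	local rc=$?
	echo "************************************"
	echo "END TEST $name"
	echo "************************************"
	return "$rc"
}
```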
00:07:10.174 [2024-07-15 10:14:34.771586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715404 ] 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:10.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:10.174 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:10.174 [2024-07-15 10:14:34.861145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.174 [2024-07-15 10:14:34.929691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:10.431 
10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.431 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:34 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.432 10:14:35 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:07:10.432 10:14:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:11.363 10:14:36 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.363 00:07:11.363 real 0m1.388s 00:07:11.363 user 0m1.240s 00:07:11.363 sys 0m0.154s 00:07:11.363 10:14:36 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.363 10:14:36 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:11.363 ************************************ 00:07:11.363 END TEST accel_comp 00:07:11.363 ************************************ 00:07:11.621 10:14:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.621 10:14:36 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:11.621 10:14:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:11.621 10:14:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.621 10:14:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.621 ************************************ 00:07:11.621 START TEST accel_decomp 00:07:11.621 ************************************ 00:07:11.621 10:14:36 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@17 -- # local 
accel_module 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:11.621 10:14:36 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:11.621 [2024-07-15 10:14:36.241688] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:11.621 [2024-07-15 10:14:36.241751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715687 ] 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:07:11.621 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:11.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:11.621 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:11.621 [2024-07-15 10:14:36.332413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.621 [2024-07-15 10:14:36.399937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.879 10:14:36 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # 
read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.879 10:14:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.812 10:14:37 
accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.812 10:14:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.812 00:07:12.812 real 0m1.387s 00:07:12.812 user 0m1.237s 00:07:12.812 sys 0m0.155s 00:07:12.812 10:14:37 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.812 10:14:37 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:12.812 ************************************ 00:07:12.812 END TEST accel_decomp 00:07:12.812 ************************************ 00:07:13.070 10:14:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.070 10:14:37 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:13.070 10:14:37 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:13.070 10:14:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.070 10:14:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.070 ************************************ 00:07:13.070 START TEST accel_decomp_full 00:07:13.070 ************************************ 00:07:13.070 10:14:37 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:13.070 10:14:37 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:13.070 [2024-07-15 10:14:37.707774] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
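Nearly every accel.sh trace entry above is the same shell idiom: a stream of "key: value" lines is split on ':' via IFS, and a case statement on the key records the opcode and the module that served it (the accel/accel.sh@19-23 entries, which is how accel_opc=decompress and accel_module=software end up set). The standalone sketch below reproduces that idiom in isolation; the function name and key spellings are illustrative assumptions rather than copies from accel.sh, and the real input source of the loop is not visible in this excerpt.

    # Minimal sketch of the IFS=: / read -r var val / case "$var" pattern seen above.
    # Key names ("opcode", "module") are assumed for illustration only.
    parse_accel_kv() {
        local accel_opc="" accel_module="" var val
        while IFS=: read -r var val; do
            case "$var" in
                *opcode*) accel_opc=${val# } ;;
                *module*) accel_module=${val# } ;;
            esac
        done
        echo "opc=${accel_opc} module=${accel_module}"
    }

    # Prints "opc=decompress module=software":
    printf 'opcode: decompress\nmodule: software\n' | parse_accel_kv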
00:07:13.070 [2024-07-15 10:14:37.707843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715969 ] 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.070 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:13.070 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.071 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:13.071 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:13.071 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:13.071 [2024-07-15 10:14:37.798454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.329 [2024-07-15 10:14:37.867742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 
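A side note on the qat_pci_device_allocate() / "EAL: Requested device ... cannot be used" blocks that open every accel_perf run in this log: they appear to come from DPDK's EAL probing the QAT virtual functions (the 0000:3d:xx.x and 0000:3f:xx.x addresses) during initialization. The decompress cases here still settle on the software module and finish with "return 0", so in this particular run the messages are informational rather than fatal. When scanning a console log like this, a couple of greps help separate that noise from genuine EAL failures; the helper lines and the log file name below are my own placeholders, not part of the SPDK tree.

    # Count the QAT probe warnings and list the distinct PCI addresses involved;
    # EAL errors that do NOT match these patterns are the ones worth reading.
    grep -c 'Reached maximum number of QAT devices' console.log
    grep -oE 'Requested device [0-9a-f:.]+ cannot be used' console.log | sort -u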
00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:13.329 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.330 10:14:37 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.330 10:14:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.703 10:14:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.703 00:07:14.703 real 0m1.394s 00:07:14.703 user 0m1.245s 00:07:14.703 sys 0m0.154s 00:07:14.703 10:14:39 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.703 10:14:39 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:14.703 ************************************ 00:07:14.703 END TEST accel_decomp_full 00:07:14.703 ************************************ 00:07:14.703 10:14:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.703 10:14:39 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.703 10:14:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:14.703 10:14:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.703 10:14:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.703 ************************************ 00:07:14.703 START TEST accel_decomp_mcore 00:07:14.703 ************************************ 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:14.703 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:14.703 [2024-07-15 10:14:39.155832] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
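The _mcore variants differ from the single-core cases above only by the extra "-m 0xf" visible in the run_test line: a hexadecimal core mask enabling cores 0-3, which matches the "Total cores available: 4" notice and the four "Reactor started on core N" entries that follow. As a worked example (a generic shell one-liner, not something accel.sh itself computes), a contiguous mask for N cores is (1 << N) - 1:

    # For ncores=4 this prints "-m 0xf", the mask passed to the mcore tests here.
    ncores=4
    printf -- '-m 0x%x\n' $(( (1 << ncores) - 1 ))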
00:07:14.703 [2024-07-15 10:14:39.155871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716227 ] 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.703 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:14.703 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:14.704 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:14.704 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:14.704 [2024-07-15 10:14:39.246334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.704 [2024-07-15 10:14:39.319291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.704 [2024-07-15 10:14:39.319389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.704 [2024-07-15 10:14:39.319451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.704 [2024-07-15 10:14:39.319453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 
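For reference, everything logged for this accel_decomp_mcore case comes from a single accel_perf process whose command line is the accel/accel.sh@12 entry near the start of the case; it is restated as plain shell below. The harness appears to feed its accel JSON configuration on fd 62 (built by build_accel_config); that JSON is not captured in this log, so the /dev/null redirection is only a placeholder assumption and a faithful rerun would need the harness-generated config.

    # Restated from the accel/accel.sh@12 trace entry for accel_decomp_mcore.
    # 62</dev/null stands in for the JSON config normally supplied on fd 62.
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" \
        -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -m 0xf \
        62</dev/null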
00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var 
val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.704 10:14:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.080 00:07:16.080 real 0m1.383s 00:07:16.080 user 0m4.601s 00:07:16.080 sys 0m0.146s 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.080 10:14:40 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:16.080 ************************************ 00:07:16.081 END TEST accel_decomp_mcore 00:07:16.081 ************************************ 00:07:16.081 10:14:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.081 10:14:40 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.081 10:14:40 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:16.081 10:14:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.081 10:14:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.081 ************************************ 00:07:16.081 START TEST accel_decomp_full_mcore 00:07:16.081 ************************************ 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:16.081 
10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:16.081 [2024-07-15 10:14:40.630168] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:16.081 [2024-07-15 10:14:40.630220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716465 ] 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:16.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:16.081 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:16.081 [2024-07-15 10:14:40.721334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.081 [2024-07-15 10:14:40.794174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.081 [2024-07-15 10:14:40.794270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.081 [2024-07-15 10:14:40.794356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.081 [2024-07-15 10:14:40.794358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.081 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.082 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.339 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.339 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:16.339 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.339 10:14:40 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.339 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.339 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.339 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.339 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.340 10:14:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.274 00:07:17.274 real 0m1.414s 00:07:17.274 user 0m4.655s 00:07:17.274 sys 0m0.166s 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.274 10:14:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:17.274 ************************************ 00:07:17.274 END TEST accel_decomp_full_mcore 00:07:17.274 ************************************ 00:07:17.274 10:14:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.274 10:14:42 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.274 10:14:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:17.274 10:14:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.274 10:14:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.532 ************************************ 00:07:17.532 START TEST accel_decomp_mthread 00:07:17.532 ************************************ 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:17.532 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:17.532 [2024-07-15 10:14:42.124276] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:17.532 [2024-07-15 10:14:42.124324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716717 ] 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: 
Requested device 0000:3f:01.0 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:17.532 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:17.532 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:17.532 [2024-07-15 10:14:42.216698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.532 [2024-07-15 10:14:42.286303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.790 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.791 10:14:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.725 10:14:43 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.725 00:07:18.725 real 0m1.400s 00:07:18.725 user 0m1.248s 00:07:18.725 sys 0m0.155s 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.725 10:14:43 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:18.725 ************************************ 00:07:18.725 END TEST accel_decomp_mthread 00:07:18.725 ************************************ 00:07:18.983 10:14:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.983 10:14:43 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.983 10:14:43 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:18.983 10:14:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.983 10:14:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.983 ************************************ 00:07:18.983 START TEST accel_decomp_full_mthread 00:07:18.983 ************************************ 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- 
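For reference, the two multi-threaded software runs here (accel_decomp_mthread just finished above, accel_decomp_full_mthread starting below) both drive accel_perf against the bundled test/accel/bib input. A rough manual reproduction, assuming the same workspace checkout and dropping the -c /dev/fd/62 descriptor that accel.sh uses to feed its (here empty) accel JSON config:

    cd /var/jenkins/workspace/crypto-phy-autotest/spdk
    # software decompress for 1 second, verify the output (-y), two worker threads
    # (-T 2, read back as val=2 in the trace above)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2
    # the *_full_mthread variant adds -o 0; the trace reads this back as a
    # '111250 bytes' transfer instead of the default '4096 bytes'
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2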
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:18.983 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:18.983 [2024-07-15 10:14:43.590469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:18.983 [2024-07-15 10:14:43.590515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716958 ] 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:18.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:18.983 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:18.983 [2024-07-15 10:14:43.681720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.983 [2024-07-15 10:14:43.751573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 
10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.242 10:14:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.176 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.434 
10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.434 00:07:20.434 real 0m1.412s 00:07:20.434 user 0m1.266s 00:07:20.434 sys 0m0.152s 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.434 10:14:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:20.434 ************************************ 00:07:20.434 END TEST accel_decomp_full_mthread 00:07:20.434 ************************************ 00:07:20.434 10:14:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.434 10:14:45 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:07:20.434 10:14:45 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:07:20.434 10:14:45 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:07:20.434 10:14:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:20.434 10:14:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1717188 00:07:20.434 10:14:45 accel -- accel/accel.sh@63 -- # waitforlisten 1717188 00:07:20.434 10:14:45 accel -- common/autotest_common.sh@829 -- # '[' -z 1717188 ']' 00:07:20.434 10:14:45 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.434 10:14:45 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:20.434 10:14:45 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.434 10:14:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:20.434 10:14:45 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.434 10:14:45 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.434 10:14:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.434 10:14:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.434 10:14:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.434 10:14:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.434 10:14:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.434 10:14:45 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:20.434 10:14:45 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:20.434 10:14:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:20.434 10:14:45 accel -- accel/accel.sh@41 -- # jq -r . 00:07:20.434 [2024-07-15 10:14:45.069278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
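At this point the harness has switched to the DPDK compressdev backend: with COMPRESSDEV=1, accel.sh appends the compressdev_scan_accel_module entry shown above to accel_json_cfg and starts spdk_tgt with that JSON on /dev/fd/63. A minimal sketch of the generated configuration, written to an illustrative file name instead of a file descriptor (the subsystems/accel/config nesting is inferred from the jq filter used by the check_save_config step below):

    cat > accel_compressdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "accel",
          "config": [
            { "method": "compressdev_scan_accel_module", "params": { "pmd": 0 } }
          ]
        }
      ]
    }
    EOF
    ./build/bin/spdk_tgt -c accel_compressdev.json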
00:07:20.434 [2024-07-15 10:14:45.069329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1717188 ] 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:20.434 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:20.434 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.434 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:20.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.435 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:20.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.435 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:20.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.435 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:20.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.435 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:20.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.435 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:20.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.435 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:20.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.435 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:20.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:20.435 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:20.435 [2024-07-15 10:14:45.160636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.693 [2024-07-15 10:14:45.235679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.951 [2024-07-15 10:14:45.726710] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:21.210 10:14:45 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.210 10:14:45 accel -- common/autotest_common.sh@862 -- # return 0 00:07:21.210 10:14:45 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:21.210 10:14:45 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:21.210 10:14:45 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:21.210 10:14:45 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:07:21.210 10:14:45 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:07:21.210 10:14:45 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:07:21.210 10:14:45 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:21.210 10:14:45 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.210 10:14:45 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:21.210 10:14:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.497 "method": "compressdev_scan_accel_module", 00:07:21.497 10:14:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:21.497 10:14:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.497 10:14:46 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.497 10:14:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.497 10:14:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.497 10:14:46 accel -- accel/accel.sh@75 -- # killprocess 1717188 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@948 -- # '[' -z 1717188 ']' 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@952 -- # kill -0 1717188 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@953 -- # uname 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1717188 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1717188' 00:07:21.497 killing process with pid 1717188 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@967 -- # kill 1717188 00:07:21.497 10:14:46 accel -- common/autotest_common.sh@972 -- # wait 1717188 00:07:21.758 10:14:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:21.759 10:14:46 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.759 10:14:46 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:21.759 10:14:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.759 10:14:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.759 ************************************ 00:07:21.759 START TEST accel_cdev_comp 00:07:21.759 ************************************ 00:07:21.759 10:14:46 accel.accel_cdev_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.759 10:14:46 
accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:21.759 10:14:46 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:07:21.759 [2024-07-15 10:14:46.528192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:21.759 [2024-07-15 10:14:46.528248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1717443 ] 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 
0000:3d:02.6 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:22.017 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:22.017 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:22.017 [2024-07-15 10:14:46.617471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.017 [2024-07-15 10:14:46.686715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.585 [2024-07-15 10:14:47.179226] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:22.585 [2024-07-15 10:14:47.181094] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d2dfe0 PMD being used: compress_qat 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:22.585 [2024-07-15 10:14:47.184629] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f32d30 PMD being used: compress_qat 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.585 10:14:47 
accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.585 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 
-- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:22.586 10:14:47 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.961 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:23.961 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.961 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.961 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.961 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:23.961 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.961 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.961 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.962 10:14:48 accel.accel_cdev_comp -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:23.962 10:14:48 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:23.962 00:07:23.962 real 0m1.831s 00:07:23.962 user 0m1.442s 00:07:23.962 sys 0m0.394s 00:07:23.962 10:14:48 accel.accel_cdev_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.962 10:14:48 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:07:23.962 ************************************ 00:07:23.962 END TEST accel_cdev_comp 00:07:23.962 ************************************ 00:07:23.962 10:14:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.962 10:14:48 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:23.962 10:14:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:23.962 10:14:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.962 10:14:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.962 ************************************ 00:07:23.962 START TEST accel_cdev_decomp 00:07:23.962 ************************************ 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:23.962 10:14:48 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:23.962 10:14:48 
accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:23.962 [2024-07-15 10:14:48.427512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:23.962 [2024-07-15 10:14:48.427572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1717878 ] 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 
0000:3f:01.5 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:23.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:23.962 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:23.962 [2024-07-15 10:14:48.517959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.962 [2024-07-15 10:14:48.588677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.554 [2024-07-15 10:14:49.078433] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:24.554 [2024-07-15 10:14:49.080231] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25affe0 PMD being used: compress_qat 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 [2024-07-15 10:14:49.083606] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x27b4d30 PMD being used: compress_qat 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 
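The repeating "-- # IFS=:", "-- # read -r var val" and "case "$var" in" markers above and below come from accel.sh echoing every accel_perf setting through a small parser loop; the two assignments that surface in the trace (accel_opc= and accel_module=dpdk_compressdev) are what the checks at the end of each test compare against. A minimal sketch of that loop, reconstructed only from the traced markers accel.sh@19-@23 (the real case patterns are not shown in this log and are illustrative only):

    # Sketch of the accel.sh parser visible in the xtrace output; input source omitted.
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=$val ;;      # traced as accel_opc=compress / decompress
            *module*) accel_module=$val ;;   # traced as accel_module=dpdk_compressdev
            *)        : ;;                   # remaining fields: block size, queue depth, run time, ...
        esac
    done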
00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.554 10:14:49 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:24.554 10:14:49 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:25.484 00:07:25.484 
real 0m1.833s 00:07:25.484 user 0m1.456s 00:07:25.484 sys 0m0.382s 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.484 10:14:50 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:25.484 ************************************ 00:07:25.484 END TEST accel_cdev_decomp 00:07:25.484 ************************************ 00:07:25.484 10:14:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.484 10:14:50 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.484 10:14:50 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:25.484 10:14:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.484 10:14:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.743 ************************************ 00:07:25.743 START TEST accel_cdev_decomp_full 00:07:25.743 ************************************ 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:25.743 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:25.743 [2024-07-15 10:14:50.322358] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:25.743 [2024-07-15 10:14:50.322405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718238 ] 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:25.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:25.743 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:25.743 [2024-07-15 10:14:50.407837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.743 [2024-07-15 10:14:50.477792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.310 [2024-07-15 10:14:50.966258] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:26.310 [2024-07-15 10:14:50.968155] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2793fe0 PMD being used: compress_qat 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 [2024-07-15 10:14:50.970688] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x27972b0 PMD being used: compress_qat 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 
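The long runs of "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" above repeat once per QAT virtual function on the two physical devices (buses 0000:3d and 0000:3f, functions 01.0-01.7 and 02.0-02.7, 32 in total) every time accel_perf starts, apparently because the PMD has already claimed as many QAT endpoints as it supports and skips the rest. They are notices, not failures; each test still reaches its END TEST banner. Two quick checks, one against a saved copy of a log like this one (the file name is hypothetical) and one on the host it ran on:

    grep -c 'Reached maximum number of QAT devices' console.log     # notices in the saved log
    ls /sys/bus/pci/devices/ | grep -cE '^0000:(3d|3f):0[12]\.'     # matching VFs present on the host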
00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 
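Behind every val= dump in this section is the same launcher, traced at accel.sh@12: accel_perf is started with a JSON config delivered over /dev/fd/62, assembled by build_accel_config (accel.sh@31-@41); the only element added here is the compressdev_scan_accel_module call with "pmd": 0, which the later notices show resolving to the QAT PMD (compress_qat). The command for this accel_cdev_decomp_full run, exactly as traced, with the flag meanings that can be read off the surrounding log noted as comments (it only works inside the harness, since fd 62 is set up by accel.sh):

    #   -c /dev/fd/62          JSON config from build_accel_config, fed over fd 62
    #   -t 1                   run time; matches the '1 seconds' value echoed in the trace
    #   -w decompress          workload; matches accel_opc=decompress
    #   -l .../test/accel/bib  input file; matches the path echoed in the trace
    #   -y, -o 0               forwarded by run_test unchanged; not spelled out in this log
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0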
00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.310 10:14:50 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:27.682 10:14:52 
accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:27.682 00:07:27.682 real 0m1.810s 00:07:27.682 user 0m1.412s 00:07:27.682 sys 0m0.398s 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.682 10:14:52 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:27.682 ************************************ 00:07:27.682 END TEST accel_cdev_decomp_full 00:07:27.682 ************************************ 00:07:27.682 10:14:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.682 10:14:52 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.682 10:14:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:27.682 10:14:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.682 10:14:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.682 ************************************ 00:07:27.682 START TEST accel_cdev_decomp_mcore 00:07:27.682 ************************************ 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:27.682 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:27.682 [2024-07-15 10:14:52.205149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
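The accel_cdev_decomp_mcore variant launched just above adds -m 0xf to the same decompress workload; the mask reappears as -c 0xf in the DPDK EAL parameter line below, which is why the app then reports "Total cores available: 4", starts a reactor on cores 0-3 and sets up several compress_qat channels (the "Channel 0x... PMD being used: compress_qat" notices that follow). Expanding a core mask by hand, for reference:

    mask=0xf                             # 0xf = binary 1111 -> cores 0, 1, 2, 3
    for bit in $(seq 0 31); do
        (( (mask >> bit) & 1 )) && echo "core $bit enabled"
    done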
00:07:27.682 [2024-07-15 10:14:52.205189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718528 ] 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:27.682 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.682 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:27.682 [2024-07-15 10:14:52.292913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.682 [2024-07-15 10:14:52.364968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.682 [2024-07-15 10:14:52.365062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.682 [2024-07-15 10:14:52.365148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.682 [2024-07-15 10:14:52.365150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.248 [2024-07-15 10:14:52.880942] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:28.248 [2024-07-15 10:14:52.882809] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xbf0600 PMD being used: compress_qat 00:07:28.248 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.248 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.248 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.248 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 [2024-07-15 10:14:52.887345] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc4e419b8b0 PMD being used: compress_qat 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 [2024-07-15 10:14:52.888419] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc4dc19b8b0 PMD being used: compress_qat 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:28.249 [2024-07-15 10:14:52.888782] 
accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xbf5890 PMD being used: compress_qat 00:07:28.249 [2024-07-15 10:14:52.888905] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc4d419b8b0 PMD being used: compress_qat 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # 
IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.249 10:14:52 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- 
# IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:29.624 00:07:29.624 real 0m1.859s 00:07:29.624 user 0m6.240s 00:07:29.624 sys 0m0.418s 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.624 10:14:54 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:29.624 ************************************ 00:07:29.624 END TEST accel_cdev_decomp_mcore 00:07:29.624 ************************************ 00:07:29.624 10:14:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.624 10:14:54 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.624 10:14:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:29.624 10:14:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.624 10:14:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.624 ************************************ 00:07:29.624 START TEST accel_cdev_decomp_full_mcore 00:07:29.624 ************************************ 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.624 10:14:54 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:29.624 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:29.624 [2024-07-15 10:14:54.144364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
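For reference, the accel_cdev_decomp_full_mcore case traced above drives SPDK's standalone accel_perf example with a decompress workload across core mask 0xf, with the accel configuration fed over /dev/fd/62 by accel.sh's build_accel_config. A minimal sketch of an equivalent manual invocation follows; the SPDK_DIR variable and the JSON envelope around the compressdev_scan_accel_module entry are assumptions for illustration, and only that entry plus the command-line flags appear verbatim in the trace above:

    # Hypothetical standalone reproduction of the traced command.
    # fd 62 carries the accel JSON config; the envelope shape is assumed,
    # the compressdev_scan_accel_module entry is taken from the trace above.
    SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
    $SPDK_DIR/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l $SPDK_DIR/test/accel/bib -y -o 0 -m 0xf \
        62<<< '{"subsystems":[{"subsystem":"accel","config":[{"method":"compressdev_scan_accel_module","params":{"pmd":0}}]}]}'

In this run the pmd value of 0 resolved to the QAT compress PMD, as the "PMD being used: compress_qat" notices further down show.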
00:07:29.624 [2024-07-15 10:14:54.144421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718819 ] 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:29.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:29.624 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:29.625 [2024-07-15 10:14:54.232319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.625 [2024-07-15 10:14:54.304919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.625 [2024-07-15 10:14:54.304977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.625 [2024-07-15 10:14:54.305062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.625 [2024-07-15 10:14:54.305065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.192 [2024-07-15 10:14:54.818872] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:30.192 [2024-07-15 10:14:54.820762] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x104a600 PMD being used: compress_qat 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 [2024-07-15 10:14:54.824415] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc1bc19b8b0 PMD being used: compress_qat 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 [2024-07-15 10:14:54.825470] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc1b419b8b0 PMD being used: compress_qat 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 
-- # val=0xf 00:07:30.192 [2024-07-15 10:14:54.825866] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x104a6a0 PMD being used: compress_qat 00:07:30.192 [2024-07-15 10:14:54.825960] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fc1ac19b8b0 PMD being used: compress_qat 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.192 10:14:54 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:31.567 00:07:31.567 real 0m1.870s 00:07:31.567 user 0m6.246s 00:07:31.567 sys 0m0.409s 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.567 10:14:55 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:31.567 ************************************ 00:07:31.567 END TEST accel_cdev_decomp_full_mcore 00:07:31.567 ************************************ 00:07:31.567 10:14:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.567 10:14:56 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.567 10:14:56 accel -- 
common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:31.567 10:14:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.567 10:14:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.567 ************************************ 00:07:31.567 START TEST accel_cdev_decomp_mthread 00:07:31.567 ************************************ 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:31.567 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:31.567 [2024-07-15 10:14:56.065011] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
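The accel_cdev_decomp_mthread case that starts here exercises the same decompress workload on a single core (the EAL parameters below show -c 0x1) but with -T 2, which, going by the _mthread naming, runs two worker threads. A sketch of the variant invocation, again with SPDK_DIR and the fd-62 config treated as illustrative assumptions rather than verbatim script internals:

    # Same decompress test as above, single core, two threads (-T 2);
    # the accel JSON config is supplied on fd 62 as in the previous sketch.
    $SPDK_DIR/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l $SPDK_DIR/test/accel/bib -y -T 2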
00:07:31.567 [2024-07-15 10:14:56.065051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719203 ] 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:31.567 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.567 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:31.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.568 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:31.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.568 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:31.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.568 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:31.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.568 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:31.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.568 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:31.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.568 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:31.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.568 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:31.568 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:31.568 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:31.568 [2024-07-15 10:14:56.153052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.568 [2024-07-15 10:14:56.222832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.135 [2024-07-15 10:14:56.708603] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:32.135 [2024-07-15 10:14:56.710464] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x18d0fe0 PMD being used: compress_qat 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.135 [2024-07-15 10:14:56.714470] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x18d6180 PMD being used: compress_qat 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.135 [2024-07-15 10:14:56.716144] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x19f8b20 PMD being used: compress_qat 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 
-- # val= 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.135 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@20 -- # val=2 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.136 10:14:56 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.070 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.329 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.329 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.329 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.329 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:33.329 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.329 10:14:57 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:33.329 00:07:33.329 real 0m1.811s 00:07:33.329 user 0m1.463s 00:07:33.329 sys 0m0.352s 00:07:33.329 10:14:57 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.329 10:14:57 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:33.329 ************************************ 00:07:33.329 END TEST accel_cdev_decomp_mthread 00:07:33.329 ************************************ 00:07:33.329 10:14:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.329 10:14:57 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.329 10:14:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:33.329 10:14:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.329 10:14:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.329 ************************************ 00:07:33.329 START TEST accel_cdev_decomp_full_mthread 00:07:33.329 ************************************ 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.329 10:14:57 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:33.329 10:14:57 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:33.329 [2024-07-15 10:14:57.966125] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:33.329 [2024-07-15 10:14:57.966177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719644 ] 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3f:01.0 cannot be used 
00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.329 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:33.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:33.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:33.330 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:33.330 [2024-07-15 10:14:58.054452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.588 [2024-07-15 10:14:58.124344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.847 [2024-07-15 10:14:58.618326] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:33.847 [2024-07-15 10:14:58.620192] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1678fe0 PMD being used: compress_qat 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.847 [2024-07-15 10:14:58.623386] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1679080 PMD being used: compress_qat 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 10:14:58 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 [2024-07-15 10:14:58.625113] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x187dc10 PMD being used: compress_qat 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.847 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.848 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.106 10:14:58 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.042 10:14:59 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:35.042 00:07:35.042 real 0m1.830s 00:07:35.042 user 0m1.449s 00:07:35.042 sys 0m0.386s 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.042 10:14:59 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:35.042 ************************************ 00:07:35.042 END TEST accel_cdev_decomp_full_mthread 00:07:35.042 ************************************ 00:07:35.042 10:14:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.042 10:14:59 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:07:35.042 10:14:59 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:35.042 10:14:59 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.042 10:14:59 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.042 10:14:59 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.042 
10:14:59 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.042 10:14:59 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.042 10:14:59 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:35.042 10:14:59 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.042 10:14:59 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:35.042 10:14:59 accel -- accel/accel.sh@41 -- # jq -r . 00:07:35.042 10:14:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.042 10:14:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.301 ************************************ 00:07:35.301 START TEST accel_dif_functional_tests 00:07:35.301 ************************************ 00:07:35.301 10:14:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.301 [2024-07-15 10:14:59.896650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:35.301 [2024-07-15 10:14:59.896689] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719934 ] 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3d:02.7 cannot be used 
00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:35.301 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:35.301 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:35.301 [2024-07-15 10:14:59.984425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.301 [2024-07-15 10:15:00.066145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.301 [2024-07-15 10:15:00.066238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.301 [2024-07-15 10:15:00.066241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.560 00:07:35.560 00:07:35.560 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.560 http://cunit.sourceforge.net/ 00:07:35.560 00:07:35.560 00:07:35.560 Suite: accel_dif 00:07:35.560 Test: verify: DIF generated, GUARD check ...passed 00:07:35.560 Test: verify: DIF generated, APPTAG check ...passed 00:07:35.560 Test: verify: DIF generated, REFTAG check ...passed 00:07:35.560 Test: verify: DIF not generated, GUARD check ...[2024-07-15 10:15:00.147376] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:35.560 passed 00:07:35.560 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 10:15:00.147433] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:35.560 passed 00:07:35.560 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 
10:15:00.147471] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:35.560 passed 00:07:35.560 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:35.560 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 10:15:00.147519] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:35.560 passed 00:07:35.560 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:35.560 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:35.560 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:35.560 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 10:15:00.147622] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:35.560 passed 00:07:35.560 Test: verify copy: DIF generated, GUARD check ...passed 00:07:35.560 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:35.560 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:35.560 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 10:15:00.147733] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:35.560 passed 00:07:35.560 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 10:15:00.147760] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:35.560 passed 00:07:35.560 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 10:15:00.147788] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:35.560 passed 00:07:35.560 Test: generate copy: DIF generated, GUARD check ...passed 00:07:35.560 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:35.560 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:35.560 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:35.560 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:35.560 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:35.560 Test: generate copy: iovecs-len validate ...[2024-07-15 10:15:00.147963] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:35.560 passed 00:07:35.560 Test: generate copy: buffer alignment validate ...passed 00:07:35.560 00:07:35.560 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.560 suites 1 1 n/a 0 0 00:07:35.560 tests 26 26 26 0 0 00:07:35.560 asserts 115 115 115 0 n/a 00:07:35.560 00:07:35.560 Elapsed time = 0.002 seconds 00:07:35.560 00:07:35.560 real 0m0.468s 00:07:35.560 user 0m0.617s 00:07:35.560 sys 0m0.179s 00:07:35.560 10:15:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.560 10:15:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:35.560 ************************************ 00:07:35.560 END TEST accel_dif_functional_tests 00:07:35.560 ************************************ 00:07:35.819 10:15:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.819 00:07:35.819 real 0m47.158s 00:07:35.819 user 0m56.142s 00:07:35.819 sys 0m9.108s 00:07:35.819 10:15:00 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.819 10:15:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.819 ************************************ 00:07:35.819 END TEST accel 00:07:35.819 ************************************ 00:07:35.819 10:15:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:35.819 10:15:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:35.819 10:15:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.819 10:15:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.819 10:15:00 -- common/autotest_common.sh@10 -- # set +x 00:07:35.819 ************************************ 00:07:35.819 START TEST accel_rpc 00:07:35.819 ************************************ 00:07:35.819 10:15:00 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:35.819 * Looking for test storage... 00:07:35.819 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:35.819 10:15:00 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.819 10:15:00 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1720046 00:07:35.819 10:15:00 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1720046 00:07:35.819 10:15:00 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1720046 ']' 00:07:35.819 10:15:00 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.819 10:15:00 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.819 10:15:00 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.819 10:15:00 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.819 10:15:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.819 10:15:00 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:35.819 [2024-07-15 10:15:00.591633] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:35.819 [2024-07-15 10:15:00.591691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720046 ] 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:36.090 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:36.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:36.090 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:36.090 [2024-07-15 10:15:00.685135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.090 [2024-07-15 10:15:00.757770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.679 10:15:01 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.679 10:15:01 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:36.679 10:15:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:36.679 10:15:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:36.679 10:15:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:36.679 10:15:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:36.679 10:15:01 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:36.679 10:15:01 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.679 10:15:01 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.679 10:15:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.679 ************************************ 00:07:36.679 START TEST accel_assign_opcode 00:07:36.679 ************************************ 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.679 [2024-07-15 10:15:01.395716] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.679 [2024-07-15 10:15:01.403732] accel_rpc.c: 167:rpc_accel_assign_opc: 
*NOTICE*: Operation copy will be assigned to module software 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.679 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.938 software 00:07:36.938 00:07:36.938 real 0m0.244s 00:07:36.938 user 0m0.041s 00:07:36.938 sys 0m0.013s 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.938 10:15:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.938 ************************************ 00:07:36.938 END TEST accel_assign_opcode 00:07:36.938 ************************************ 00:07:36.938 10:15:01 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:36.938 10:15:01 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1720046 00:07:36.938 10:15:01 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1720046 ']' 00:07:36.938 10:15:01 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1720046 00:07:36.938 10:15:01 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:36.938 10:15:01 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.938 10:15:01 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1720046 00:07:36.938 10:15:01 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.938 10:15:01 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.196 10:15:01 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1720046' 00:07:37.196 killing process with pid 1720046 00:07:37.196 10:15:01 accel_rpc -- common/autotest_common.sh@967 -- # kill 1720046 00:07:37.196 10:15:01 accel_rpc -- common/autotest_common.sh@972 -- # wait 1720046 00:07:37.459 00:07:37.459 real 0m1.596s 00:07:37.459 user 0m1.567s 00:07:37.459 sys 0m0.514s 00:07:37.459 10:15:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.459 10:15:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.459 ************************************ 00:07:37.459 END TEST accel_rpc 00:07:37.459 ************************************ 00:07:37.459 10:15:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:37.459 10:15:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:37.459 10:15:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.459 10:15:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.459 10:15:02 -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.459 ************************************ 00:07:37.459 START TEST app_cmdline 00:07:37.459 ************************************ 00:07:37.459 10:15:02 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:37.459 * Looking for test storage... 00:07:37.459 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:37.459 10:15:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:37.459 10:15:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1720515 00:07:37.459 10:15:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:37.459 10:15:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1720515 00:07:37.459 10:15:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1720515 ']' 00:07:37.459 10:15:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.459 10:15:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.459 10:15:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.459 10:15:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.459 10:15:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.717 [2024-07-15 10:15:02.262293] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:37.717 [2024-07-15 10:15:02.262349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720515 ] 00:07:37.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.717 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:37.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.717 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:37.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.717 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:37.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.717 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:37.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.717 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:37.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.717 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:37.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.717 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:37.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.717 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 
0000:3d:02.2 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:37.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:37.718 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:37.718 [2024-07-15 10:15:02.354643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.718 [2024-07-15 10:15:02.428670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.284 10:15:03 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.284 10:15:03 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:38.284 10:15:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:38.543 { 00:07:38.543 "version": "SPDK 
v24.09-pre git sha1 719d03c6a", 00:07:38.543 "fields": { 00:07:38.543 "major": 24, 00:07:38.543 "minor": 9, 00:07:38.543 "patch": 0, 00:07:38.543 "suffix": "-pre", 00:07:38.543 "commit": "719d03c6a" 00:07:38.543 } 00:07:38.543 } 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.543 10:15:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:07:38.543 10:15:03 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.801 request: 00:07:38.801 { 00:07:38.801 "method": "env_dpdk_get_mem_stats", 00:07:38.801 "req_id": 1 00:07:38.801 } 00:07:38.801 Got JSON-RPC error response 00:07:38.801 response: 00:07:38.801 { 00:07:38.801 "code": -32601, 00:07:38.801 "message": "Method not found" 00:07:38.801 } 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:38.801 
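The rpc_get_methods / env_dpdk_get_mem_stats exchange above is, in effect, a check that the --rpcs-allowed allowlist passed to spdk_tgt is enforced: the two allowed methods succeed, while any other method is rejected with JSON-RPC error -32601 ("Method not found"). A minimal by-hand sketch of the same check, assuming an SPDK checkout at $SPDK_DIR (illustrative variable name) and an spdk_tgt already listening on /var/tmp/spdk.sock that was started with the same --rpcs-allowed spdk_get_version,rpc_get_methods list:
  $SPDK_DIR/scripts/rpc.py spdk_get_version | jq -r .version      # allowed: prints "SPDK v24.09-pre git sha1 719d03c6a"
  $SPDK_DIR/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed: should list exactly rpc_get_methods and spdk_get_version
  $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats                 # not on the allowlist: fails with the -32601 "Method not found" response shown above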
10:15:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1720515 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1720515 ']' 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1720515 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1720515 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1720515' 00:07:38.801 killing process with pid 1720515 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@967 -- # kill 1720515 00:07:38.801 10:15:03 app_cmdline -- common/autotest_common.sh@972 -- # wait 1720515 00:07:39.058 00:07:39.059 real 0m1.681s 00:07:39.059 user 0m1.899s 00:07:39.059 sys 0m0.516s 00:07:39.059 10:15:03 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.059 10:15:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:39.059 ************************************ 00:07:39.059 END TEST app_cmdline 00:07:39.059 ************************************ 00:07:39.059 10:15:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.059 10:15:03 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:39.059 10:15:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.059 10:15:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.059 10:15:03 -- common/autotest_common.sh@10 -- # set +x 00:07:39.316 ************************************ 00:07:39.316 START TEST version 00:07:39.316 ************************************ 00:07:39.316 10:15:03 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:39.316 * Looking for test storage... 
00:07:39.316 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:39.316 10:15:03 version -- app/version.sh@17 -- # get_header_version major 00:07:39.316 10:15:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:39.316 10:15:03 version -- app/version.sh@14 -- # cut -f2 00:07:39.316 10:15:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.316 10:15:03 version -- app/version.sh@17 -- # major=24 00:07:39.316 10:15:03 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.316 10:15:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:39.316 10:15:03 version -- app/version.sh@14 -- # cut -f2 00:07:39.316 10:15:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.316 10:15:03 version -- app/version.sh@18 -- # minor=9 00:07:39.316 10:15:03 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.316 10:15:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:39.316 10:15:03 version -- app/version.sh@14 -- # cut -f2 00:07:39.316 10:15:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.316 10:15:03 version -- app/version.sh@19 -- # patch=0 00:07:39.316 10:15:03 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.316 10:15:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:39.316 10:15:03 version -- app/version.sh@14 -- # cut -f2 00:07:39.316 10:15:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.316 10:15:03 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.316 10:15:03 version -- app/version.sh@22 -- # version=24.9 00:07:39.316 10:15:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.316 10:15:03 version -- app/version.sh@28 -- # version=24.9rc0 00:07:39.316 10:15:03 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:07:39.316 10:15:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:39.316 10:15:04 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:39.316 10:15:04 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:39.316 00:07:39.316 real 0m0.180s 00:07:39.316 user 0m0.095s 00:07:39.316 sys 0m0.132s 00:07:39.316 10:15:04 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.316 10:15:04 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.316 ************************************ 00:07:39.316 END TEST version 00:07:39.316 ************************************ 00:07:39.316 10:15:04 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.316 10:15:04 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:07:39.316 10:15:04 -- spdk/autotest.sh@189 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:39.316 10:15:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.316 10:15:04 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:07:39.316 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:07:39.575 ************************************ 00:07:39.575 START TEST blockdev_general 00:07:39.575 ************************************ 00:07:39.575 10:15:04 blockdev_general -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:39.575 * Looking for test storage... 00:07:39.575 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:39.575 10:15:04 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1721275 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:39.575 10:15:04 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 1721275 00:07:39.575 10:15:04 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 1721275 ']' 00:07:39.575 10:15:04 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.575 10:15:04 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 
00:07:39.576 10:15:04 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.576 10:15:04 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.576 10:15:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:39.576 [2024-07-15 10:15:04.293351] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:39.576 [2024-07-15 10:15:04.293403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721275 ] 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:01.2 cannot be used 
00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:39.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.576 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:39.834 [2024-07-15 10:15:04.387415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.834 [2024-07-15 10:15:04.457558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.398 10:15:05 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.398 10:15:05 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:07:40.398 10:15:05 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:40.398 10:15:05 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:07:40.398 10:15:05 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:07:40.398 10:15:05 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.398 10:15:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:40.654 [2024-07-15 10:15:05.311286] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:40.654 [2024-07-15 10:15:05.311326] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:40.654 00:07:40.654 [2024-07-15 10:15:05.319282] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:40.655 [2024-07-15 10:15:05.319299] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:40.655 00:07:40.655 Malloc0 00:07:40.655 Malloc1 00:07:40.655 Malloc2 00:07:40.655 Malloc3 00:07:40.655 Malloc4 00:07:40.655 Malloc5 00:07:40.655 Malloc6 00:07:40.655 Malloc7 00:07:40.655 Malloc8 00:07:40.655 Malloc9 00:07:40.911 [2024-07-15 10:15:05.444531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:40.911 [2024-07-15 10:15:05.444570] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.911 [2024-07-15 10:15:05.444585] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa45850 00:07:40.911 [2024-07-15 10:15:05.444593] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.911 [2024-07-15 10:15:05.445517] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.911 [2024-07-15 10:15:05.445541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:40.911 TestPT 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:40.911 5000+0 records in 00:07:40.911 5000+0 records out 00:07:40.911 10240000 bytes (10 MB, 9.8 MiB) copied, 0.015416 s, 664 MB/s 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:40.911 AIO0 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:40.911 10:15:05 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.911 10:15:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:41.170 10:15:05 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.170 10:15:05 
blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:41.170 10:15:05 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:41.171 10:15:05 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ddd6504a-87fc-47aa-afd4-b9545d6edec4"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ddd6504a-87fc-47aa-afd4-b9545d6edec4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "32620a6e-f405-586b-ac66-ad5b847482ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "32620a6e-f405-586b-ac66-ad5b847482ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a638cb23-cd6f-55d2-a0c5-188394f36c8d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a638cb23-cd6f-55d2-a0c5-188394f36c8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bc997c5f-d494-5035-b4d8-985641695eae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc997c5f-d494-5035-b4d8-985641695eae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cb87c7dd-b7cd-55f7-82c2-fa887bcbfa96"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cb87c7dd-b7cd-55f7-82c2-fa887bcbfa96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "24d592a6-f6a0-55b8-ba9d-99043da6e2f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "24d592a6-f6a0-55b8-ba9d-99043da6e2f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ae53d14d-8b51-5a44-926e-d2d6d440e1be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae53d14d-8b51-5a44-926e-d2d6d440e1be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' 
' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "fac7d6b0-7ec7-5ce4-8314-d07b94dd3e31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fac7d6b0-7ec7-5ce4-8314-d07b94dd3e31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "7de92a74-e1fd-5c97-add6-2e1f2e8329ac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7de92a74-e1fd-5c97-add6-2e1f2e8329ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e34bfb11-0013-58d1-8274-f39796870cf9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e34bfb11-0013-58d1-8274-f39796870cf9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "3bf904c4-28e1-5cef-aef5-ba9607b5fa9b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3bf904c4-28e1-5cef-aef5-ba9607b5fa9b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "99d9178b-a8af-5939-9239-1ab5bff29dc2"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "99d9178b-a8af-5939-9239-1ab5bff29dc2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f993a620-5c76-42b3-83a3-6352c43f0d6d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f993a620-5c76-42b3-83a3-6352c43f0d6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f993a620-5c76-42b3-83a3-6352c43f0d6d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "f394658b-651e-43ba-b2c1-a0c22ec38d66",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9459c162-285d-4996-9652-7d04c190b8cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "687c9fe0-5942-4f4b-9a57-331031b2bc77"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"687c9fe0-5942-4f4b-9a57-331031b2bc77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "687c9fe0-5942-4f4b-9a57-331031b2bc77",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "64b143c0-f9a5-44bd-8e69-3e4a834a6827",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "3a87068c-7fe3-4210-bde9-1e76ab355f26",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "13054c09-cc8c-42c7-9f60-30aef3cc41b6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "13054c09-cc8c-42c7-9f60-30aef3cc41b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "13054c09-cc8c-42c7-9f60-30aef3cc41b6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b6ea9623-d892-4268-8caa-f9e9a6e6fd78",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a5186dfe-c67c-417c-8fde-4054a9622849",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0bf72cda-f0a7-4fed-9447-e546ac5e82ea"' ' ],' ' 
"product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0bf72cda-f0a7-4fed-9447-e546ac5e82ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:07:41.171 10:15:05 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:41.171 10:15:05 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:07:41.171 10:15:05 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:41.171 10:15:05 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 1721275 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 1721275 ']' 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 1721275 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1721275 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1721275' 00:07:41.171 killing process with pid 1721275 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@967 -- # kill 1721275 00:07:41.171 10:15:05 blockdev_general -- common/autotest_common.sh@972 -- # wait 1721275 00:07:41.736 10:15:06 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:41.736 10:15:06 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:41.736 10:15:06 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:41.736 10:15:06 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.736 10:15:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:41.736 ************************************ 00:07:41.736 START TEST bdev_hello_world 00:07:41.736 ************************************ 00:07:41.736 10:15:06 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:41.736 [2024-07-15 10:15:06.335805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:41.736 [2024-07-15 10:15:06.335848] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1721813 ] 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.736 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:41.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:41.737 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.737 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:41.737 [2024-07-15 10:15:06.425574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.737 [2024-07-15 10:15:06.495594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.995 [2024-07-15 10:15:06.637371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:41.995 [2024-07-15 10:15:06.637419] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:41.995 [2024-07-15 10:15:06.637444] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:41.995 [2024-07-15 10:15:06.645381] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:41.995 [2024-07-15 10:15:06.645399] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:41.995 [2024-07-15 10:15:06.653394] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:41.995 [2024-07-15 10:15:06.653412] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:41.995 [2024-07-15 10:15:06.720265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:41.995 [2024-07-15 10:15:06.720305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.995 [2024-07-15 10:15:06.720316] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1202950 00:07:41.995 [2024-07-15 10:15:06.720340] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.995 [2024-07-15 10:15:06.721412] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.995 [2024-07-15 10:15:06.721435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:42.254 [2024-07-15 10:15:06.861074] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:42.254 [2024-07-15 10:15:06.861110] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:42.254 [2024-07-15 10:15:06.861134] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:42.254 [2024-07-15 10:15:06.861162] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:42.254 
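The hello_bdev notices in this stretch of the log come from SPDK's example application: it opens the bdev named with -b, opens an I/O channel, writes a buffer, then reads it back and prints "Hello World!". A rough way to reproduce that sequence outside the CI harness is sketched below; the config path and Malloc sizing are assumptions, whereas the CI run points the same binary at the much larger test/bdev/bdev.json seen in the trace:
# Hedged sketch: one Malloc bdev in a throwaway JSON config, then the example binary.
cat > /tmp/hello_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
./build/examples/hello_bdev --json /tmp/hello_bdev.json -b Malloc0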
[2024-07-15 10:15:06.861192] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:42.254 [2024-07-15 10:15:06.861204] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:42.254 [2024-07-15 10:15:06.861229] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:42.254 00:07:42.254 [2024-07-15 10:15:06.861245] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:42.513 00:07:42.513 real 0m0.825s 00:07:42.513 user 0m0.528s 00:07:42.513 sys 0m0.251s 00:07:42.513 10:15:07 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.513 10:15:07 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:42.513 ************************************ 00:07:42.513 END TEST bdev_hello_world 00:07:42.513 ************************************ 00:07:42.513 10:15:07 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:42.513 10:15:07 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:42.513 10:15:07 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:42.513 10:15:07 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.513 10:15:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:42.513 ************************************ 00:07:42.513 START TEST bdev_bounds 00:07:42.513 ************************************ 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1722091 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1722091' 00:07:42.513 Process bdevio pid: 1722091 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1722091 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1722091 ']' 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.513 10:15:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:42.513 [2024-07-15 10:15:07.250129] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
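The bdev_bounds test that starts here launches bdevio with -w (wait for RPC), records its PID, installs a cleanup trap, and sits in waitforlisten until the app's RPC socket answers before tests.py drives it. The sketch below is a simplified rendering of that start/wait/test/kill pattern, not the autotest_common.sh implementation; the socket path and polling interval are assumptions:
# Start the server, wait for its RPC socket, run the suites, then tear it down.
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &   # full command is in the trace above
bdevio_pid=$!
trap 'kill -9 $bdevio_pid 2>/dev/null' SIGINT SIGTERM EXIT
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                                      # stand-in for waitforlisten
done
./test/bdev/bdevio/tests.py perform_tests                          # produces the suites below
kill "$bdevio_pid" && wait "$bdevio_pid"
trap - SIGINT SIGTERM EXIT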
00:07:42.513 [2024-07-15 10:15:07.250177] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722091 ] 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.513 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:42.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:01.6 cannot be used 
00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:42.514 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:42.514 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:42.772 [2024-07-15 10:15:07.341321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.772 [2024-07-15 10:15:07.411198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.772 [2024-07-15 10:15:07.411295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.772 [2024-07-15 10:15:07.411297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.772 [2024-07-15 10:15:07.549558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:42.772 [2024-07-15 10:15:07.549606] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:42.772 [2024-07-15 10:15:07.549616] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:42.772 [2024-07-15 10:15:07.557565] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:42.772 [2024-07-15 10:15:07.557583] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:43.030 [2024-07-15 10:15:07.565581] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:43.030 [2024-07-15 10:15:07.565597] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:43.030 [2024-07-15 10:15:07.633542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:43.030 [2024-07-15 10:15:07.633584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.030 [2024-07-15 10:15:07.633596] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24e6e20 00:07:43.030 [2024-07-15 10:15:07.633619] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.030 [2024-07-15 10:15:07.634662] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.030 [2024-07-15 10:15:07.634688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:43.287 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.287 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:07:43.287 10:15:08 
blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:43.546 I/O targets: 00:07:43.546 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:43.546 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:43.546 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:43.546 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:43.546 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:43.546 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:43.546 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:43.546 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:43.546 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:43.546 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:43.546 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:43.546 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:43.546 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:43.546 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:43.546 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:43.546 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:43.546 00:07:43.546 00:07:43.546 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.546 http://cunit.sourceforge.net/ 00:07:43.546 00:07:43.546 00:07:43.546 Suite: bdevio tests on: AIO0 00:07:43.546 Test: blockdev write read block ...passed 00:07:43.546 Test: blockdev write zeroes read block ...passed 00:07:43.546 Test: blockdev write zeroes read no split ...passed 00:07:43.546 Test: blockdev write zeroes read split ...passed 00:07:43.546 Test: blockdev write zeroes read split partial ...passed 00:07:43.546 Test: blockdev reset ...passed 00:07:43.546 Test: blockdev write read 8 blocks ...passed 00:07:43.546 Test: blockdev write read size > 128k ...passed 00:07:43.546 Test: blockdev write read invalid size ...passed 00:07:43.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.546 Test: blockdev write read max offset ...passed 00:07:43.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.546 Test: blockdev writev readv 8 blocks ...passed 00:07:43.546 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.546 Test: blockdev writev readv block ...passed 00:07:43.546 Test: blockdev writev readv size > 128k ...passed 00:07:43.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.546 Test: blockdev comparev and writev ...passed 00:07:43.546 Test: blockdev nvme passthru rw ...passed 00:07:43.546 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.546 Test: blockdev nvme admin passthru ...passed 00:07:43.546 Test: blockdev copy ...passed 00:07:43.546 Suite: bdevio tests on: raid1 00:07:43.546 Test: blockdev write read block ...passed 00:07:43.546 Test: blockdev write zeroes read block ...passed 00:07:43.546 Test: blockdev write zeroes read no split ...passed 00:07:43.546 Test: blockdev write zeroes read split ...passed 00:07:43.546 Test: blockdev write zeroes read split partial ...passed 00:07:43.546 Test: blockdev reset ...passed 00:07:43.546 Test: blockdev write read 8 blocks ...passed 00:07:43.546 Test: blockdev write read size > 128k ...passed 00:07:43.546 Test: blockdev write read invalid size ...passed 00:07:43.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.546 Test: blockdev write read max offset ...passed 
00:07:43.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.546 Test: blockdev writev readv 8 blocks ...passed 00:07:43.546 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.546 Test: blockdev writev readv block ...passed 00:07:43.546 Test: blockdev writev readv size > 128k ...passed 00:07:43.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.546 Test: blockdev comparev and writev ...passed 00:07:43.546 Test: blockdev nvme passthru rw ...passed 00:07:43.546 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.546 Test: blockdev nvme admin passthru ...passed 00:07:43.546 Test: blockdev copy ...passed 00:07:43.546 Suite: bdevio tests on: concat0 00:07:43.546 Test: blockdev write read block ...passed 00:07:43.546 Test: blockdev write zeroes read block ...passed 00:07:43.546 Test: blockdev write zeroes read no split ...passed 00:07:43.546 Test: blockdev write zeroes read split ...passed 00:07:43.546 Test: blockdev write zeroes read split partial ...passed 00:07:43.546 Test: blockdev reset ...passed 00:07:43.546 Test: blockdev write read 8 blocks ...passed 00:07:43.546 Test: blockdev write read size > 128k ...passed 00:07:43.546 Test: blockdev write read invalid size ...passed 00:07:43.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.546 Test: blockdev write read max offset ...passed 00:07:43.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.546 Test: blockdev writev readv 8 blocks ...passed 00:07:43.546 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.546 Test: blockdev writev readv block ...passed 00:07:43.546 Test: blockdev writev readv size > 128k ...passed 00:07:43.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.546 Test: blockdev comparev and writev ...passed 00:07:43.546 Test: blockdev nvme passthru rw ...passed 00:07:43.546 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.546 Test: blockdev nvme admin passthru ...passed 00:07:43.546 Test: blockdev copy ...passed 00:07:43.546 Suite: bdevio tests on: raid0 00:07:43.546 Test: blockdev write read block ...passed 00:07:43.546 Test: blockdev write zeroes read block ...passed 00:07:43.546 Test: blockdev write zeroes read no split ...passed 00:07:43.546 Test: blockdev write zeroes read split ...passed 00:07:43.546 Test: blockdev write zeroes read split partial ...passed 00:07:43.546 Test: blockdev reset ...passed 00:07:43.546 Test: blockdev write read 8 blocks ...passed 00:07:43.546 Test: blockdev write read size > 128k ...passed 00:07:43.546 Test: blockdev write read invalid size ...passed 00:07:43.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.546 Test: blockdev write read max offset ...passed 00:07:43.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.546 Test: blockdev writev readv 8 blocks ...passed 00:07:43.546 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.546 Test: blockdev writev readv block ...passed 00:07:43.546 Test: blockdev writev readv size > 128k ...passed 00:07:43.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.546 Test: blockdev comparev and writev ...passed 00:07:43.546 Test: blockdev nvme passthru rw ...passed 00:07:43.546 Test: blockdev nvme 
passthru vendor specific ...passed 00:07:43.546 Test: blockdev nvme admin passthru ...passed 00:07:43.546 Test: blockdev copy ...passed 00:07:43.546 Suite: bdevio tests on: TestPT 00:07:43.546 Test: blockdev write read block ...passed 00:07:43.546 Test: blockdev write zeroes read block ...passed 00:07:43.546 Test: blockdev write zeroes read no split ...passed 00:07:43.546 Test: blockdev write zeroes read split ...passed 00:07:43.546 Test: blockdev write zeroes read split partial ...passed 00:07:43.546 Test: blockdev reset ...passed 00:07:43.546 Test: blockdev write read 8 blocks ...passed 00:07:43.546 Test: blockdev write read size > 128k ...passed 00:07:43.546 Test: blockdev write read invalid size ...passed 00:07:43.546 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.546 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.546 Test: blockdev write read max offset ...passed 00:07:43.546 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.546 Test: blockdev writev readv 8 blocks ...passed 00:07:43.546 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.546 Test: blockdev writev readv block ...passed 00:07:43.546 Test: blockdev writev readv size > 128k ...passed 00:07:43.546 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.546 Test: blockdev comparev and writev ...passed 00:07:43.546 Test: blockdev nvme passthru rw ...passed 00:07:43.546 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.546 Test: blockdev nvme admin passthru ...passed 00:07:43.546 Test: blockdev copy ...passed 00:07:43.546 Suite: bdevio tests on: Malloc2p7 00:07:43.546 Test: blockdev write read block ...passed 00:07:43.546 Test: blockdev write zeroes read block ...passed 00:07:43.546 Test: blockdev write zeroes read no split ...passed 00:07:43.546 Test: blockdev write zeroes read split ...passed 00:07:43.546 Test: blockdev write zeroes read split partial ...passed 00:07:43.546 Test: blockdev reset ...passed 00:07:43.546 Test: blockdev write read 8 blocks ...passed 00:07:43.546 Test: blockdev write read size > 128k ...passed 00:07:43.546 Test: blockdev write read invalid size ...passed 00:07:43.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.547 Test: blockdev write read max offset ...passed 00:07:43.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.547 Test: blockdev writev readv 8 blocks ...passed 00:07:43.547 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.547 Test: blockdev writev readv block ...passed 00:07:43.547 Test: blockdev writev readv size > 128k ...passed 00:07:43.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.547 Test: blockdev comparev and writev ...passed 00:07:43.547 Test: blockdev nvme passthru rw ...passed 00:07:43.547 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.547 Test: blockdev nvme admin passthru ...passed 00:07:43.547 Test: blockdev copy ...passed 00:07:43.547 Suite: bdevio tests on: Malloc2p6 00:07:43.547 Test: blockdev write read block ...passed 00:07:43.547 Test: blockdev write zeroes read block ...passed 00:07:43.547 Test: blockdev write zeroes read no split ...passed 00:07:43.547 Test: blockdev write zeroes read split ...passed 00:07:43.547 Test: blockdev write zeroes read split partial ...passed 00:07:43.547 Test: blockdev reset ...passed 00:07:43.547 
Test: blockdev write read 8 blocks ...passed 00:07:43.547 Test: blockdev write read size > 128k ...passed 00:07:43.547 Test: blockdev write read invalid size ...passed 00:07:43.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.547 Test: blockdev write read max offset ...passed 00:07:43.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.547 Test: blockdev writev readv 8 blocks ...passed 00:07:43.547 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.547 Test: blockdev writev readv block ...passed 00:07:43.547 Test: blockdev writev readv size > 128k ...passed 00:07:43.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.547 Test: blockdev comparev and writev ...passed 00:07:43.547 Test: blockdev nvme passthru rw ...passed 00:07:43.547 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.547 Test: blockdev nvme admin passthru ...passed 00:07:43.547 Test: blockdev copy ...passed 00:07:43.547 Suite: bdevio tests on: Malloc2p5 00:07:43.547 Test: blockdev write read block ...passed 00:07:43.547 Test: blockdev write zeroes read block ...passed 00:07:43.547 Test: blockdev write zeroes read no split ...passed 00:07:43.547 Test: blockdev write zeroes read split ...passed 00:07:43.547 Test: blockdev write zeroes read split partial ...passed 00:07:43.547 Test: blockdev reset ...passed 00:07:43.547 Test: blockdev write read 8 blocks ...passed 00:07:43.547 Test: blockdev write read size > 128k ...passed 00:07:43.547 Test: blockdev write read invalid size ...passed 00:07:43.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.547 Test: blockdev write read max offset ...passed 00:07:43.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.547 Test: blockdev writev readv 8 blocks ...passed 00:07:43.547 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.547 Test: blockdev writev readv block ...passed 00:07:43.547 Test: blockdev writev readv size > 128k ...passed 00:07:43.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.547 Test: blockdev comparev and writev ...passed 00:07:43.547 Test: blockdev nvme passthru rw ...passed 00:07:43.547 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.547 Test: blockdev nvme admin passthru ...passed 00:07:43.547 Test: blockdev copy ...passed 00:07:43.547 Suite: bdevio tests on: Malloc2p4 00:07:43.547 Test: blockdev write read block ...passed 00:07:43.547 Test: blockdev write zeroes read block ...passed 00:07:43.547 Test: blockdev write zeroes read no split ...passed 00:07:43.547 Test: blockdev write zeroes read split ...passed 00:07:43.547 Test: blockdev write zeroes read split partial ...passed 00:07:43.547 Test: blockdev reset ...passed 00:07:43.547 Test: blockdev write read 8 blocks ...passed 00:07:43.547 Test: blockdev write read size > 128k ...passed 00:07:43.547 Test: blockdev write read invalid size ...passed 00:07:43.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.547 Test: blockdev write read max offset ...passed 00:07:43.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.547 Test: blockdev writev readv 8 blocks ...passed 
00:07:43.547 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.547 Test: blockdev writev readv block ...passed 00:07:43.547 Test: blockdev writev readv size > 128k ...passed 00:07:43.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.547 Test: blockdev comparev and writev ...passed 00:07:43.547 Test: blockdev nvme passthru rw ...passed 00:07:43.547 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.547 Test: blockdev nvme admin passthru ...passed 00:07:43.547 Test: blockdev copy ...passed 00:07:43.547 Suite: bdevio tests on: Malloc2p3 00:07:43.547 Test: blockdev write read block ...passed 00:07:43.547 Test: blockdev write zeroes read block ...passed 00:07:43.547 Test: blockdev write zeroes read no split ...passed 00:07:43.547 Test: blockdev write zeroes read split ...passed 00:07:43.547 Test: blockdev write zeroes read split partial ...passed 00:07:43.547 Test: blockdev reset ...passed 00:07:43.547 Test: blockdev write read 8 blocks ...passed 00:07:43.547 Test: blockdev write read size > 128k ...passed 00:07:43.547 Test: blockdev write read invalid size ...passed 00:07:43.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.547 Test: blockdev write read max offset ...passed 00:07:43.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.547 Test: blockdev writev readv 8 blocks ...passed 00:07:43.547 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.547 Test: blockdev writev readv block ...passed 00:07:43.547 Test: blockdev writev readv size > 128k ...passed 00:07:43.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.547 Test: blockdev comparev and writev ...passed 00:07:43.547 Test: blockdev nvme passthru rw ...passed 00:07:43.547 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.547 Test: blockdev nvme admin passthru ...passed 00:07:43.547 Test: blockdev copy ...passed 00:07:43.547 Suite: bdevio tests on: Malloc2p2 00:07:43.547 Test: blockdev write read block ...passed 00:07:43.547 Test: blockdev write zeroes read block ...passed 00:07:43.547 Test: blockdev write zeroes read no split ...passed 00:07:43.547 Test: blockdev write zeroes read split ...passed 00:07:43.547 Test: blockdev write zeroes read split partial ...passed 00:07:43.547 Test: blockdev reset ...passed 00:07:43.547 Test: blockdev write read 8 blocks ...passed 00:07:43.547 Test: blockdev write read size > 128k ...passed 00:07:43.547 Test: blockdev write read invalid size ...passed 00:07:43.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.547 Test: blockdev write read max offset ...passed 00:07:43.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.547 Test: blockdev writev readv 8 blocks ...passed 00:07:43.547 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.547 Test: blockdev writev readv block ...passed 00:07:43.547 Test: blockdev writev readv size > 128k ...passed 00:07:43.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.547 Test: blockdev comparev and writev ...passed 00:07:43.547 Test: blockdev nvme passthru rw ...passed 00:07:43.547 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.547 Test: blockdev nvme admin passthru ...passed 00:07:43.547 Test: blockdev copy ...passed 
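Each suite above repeats the same 23 per-bdev I/O tests, so the totals in the Run Summary further down follow directly from the bdev count: 16 suites x 23 tests = 368. If the console output is saved to a file (filename assumed, one test result per line as the real console prints it), the arithmetic can be cross-checked with two greps:
# Quick sanity check against the Run Summary below.
grep -c 'Suite: bdevio tests on:' bdevio-console.log   # expect 16
grep -c 'Test: blockdev'          bdevio-console.log   # expect 368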
00:07:43.547 Suite: bdevio tests on: Malloc2p1 00:07:43.547 Test: blockdev write read block ...passed 00:07:43.547 Test: blockdev write zeroes read block ...passed 00:07:43.547 Test: blockdev write zeroes read no split ...passed 00:07:43.547 Test: blockdev write zeroes read split ...passed 00:07:43.547 Test: blockdev write zeroes read split partial ...passed 00:07:43.547 Test: blockdev reset ...passed 00:07:43.547 Test: blockdev write read 8 blocks ...passed 00:07:43.547 Test: blockdev write read size > 128k ...passed 00:07:43.547 Test: blockdev write read invalid size ...passed 00:07:43.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.547 Test: blockdev write read max offset ...passed 00:07:43.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.547 Test: blockdev writev readv 8 blocks ...passed 00:07:43.547 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.547 Test: blockdev writev readv block ...passed 00:07:43.547 Test: blockdev writev readv size > 128k ...passed 00:07:43.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.547 Test: blockdev comparev and writev ...passed 00:07:43.547 Test: blockdev nvme passthru rw ...passed 00:07:43.547 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.547 Test: blockdev nvme admin passthru ...passed 00:07:43.547 Test: blockdev copy ...passed 00:07:43.547 Suite: bdevio tests on: Malloc2p0 00:07:43.547 Test: blockdev write read block ...passed 00:07:43.547 Test: blockdev write zeroes read block ...passed 00:07:43.547 Test: blockdev write zeroes read no split ...passed 00:07:43.547 Test: blockdev write zeroes read split ...passed 00:07:43.547 Test: blockdev write zeroes read split partial ...passed 00:07:43.547 Test: blockdev reset ...passed 00:07:43.547 Test: blockdev write read 8 blocks ...passed 00:07:43.547 Test: blockdev write read size > 128k ...passed 00:07:43.547 Test: blockdev write read invalid size ...passed 00:07:43.547 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.547 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.547 Test: blockdev write read max offset ...passed 00:07:43.547 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.547 Test: blockdev writev readv 8 blocks ...passed 00:07:43.547 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.547 Test: blockdev writev readv block ...passed 00:07:43.547 Test: blockdev writev readv size > 128k ...passed 00:07:43.547 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.547 Test: blockdev comparev and writev ...passed 00:07:43.547 Test: blockdev nvme passthru rw ...passed 00:07:43.547 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.547 Test: blockdev nvme admin passthru ...passed 00:07:43.547 Test: blockdev copy ...passed 00:07:43.547 Suite: bdevio tests on: Malloc1p1 00:07:43.547 Test: blockdev write read block ...passed 00:07:43.547 Test: blockdev write zeroes read block ...passed 00:07:43.547 Test: blockdev write zeroes read no split ...passed 00:07:43.547 Test: blockdev write zeroes read split ...passed 00:07:43.547 Test: blockdev write zeroes read split partial ...passed 00:07:43.548 Test: blockdev reset ...passed 00:07:43.548 Test: blockdev write read 8 blocks ...passed 00:07:43.548 Test: blockdev write read size > 128k ...passed 00:07:43.548 Test: 
blockdev write read invalid size ...passed 00:07:43.548 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.548 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.548 Test: blockdev write read max offset ...passed 00:07:43.548 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.548 Test: blockdev writev readv 8 blocks ...passed 00:07:43.548 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.548 Test: blockdev writev readv block ...passed 00:07:43.548 Test: blockdev writev readv size > 128k ...passed 00:07:43.548 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.548 Test: blockdev comparev and writev ...passed 00:07:43.548 Test: blockdev nvme passthru rw ...passed 00:07:43.548 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.548 Test: blockdev nvme admin passthru ...passed 00:07:43.548 Test: blockdev copy ...passed 00:07:43.548 Suite: bdevio tests on: Malloc1p0 00:07:43.548 Test: blockdev write read block ...passed 00:07:43.548 Test: blockdev write zeroes read block ...passed 00:07:43.548 Test: blockdev write zeroes read no split ...passed 00:07:43.548 Test: blockdev write zeroes read split ...passed 00:07:43.548 Test: blockdev write zeroes read split partial ...passed 00:07:43.548 Test: blockdev reset ...passed 00:07:43.548 Test: blockdev write read 8 blocks ...passed 00:07:43.548 Test: blockdev write read size > 128k ...passed 00:07:43.548 Test: blockdev write read invalid size ...passed 00:07:43.548 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.548 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.548 Test: blockdev write read max offset ...passed 00:07:43.548 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.548 Test: blockdev writev readv 8 blocks ...passed 00:07:43.548 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.548 Test: blockdev writev readv block ...passed 00:07:43.548 Test: blockdev writev readv size > 128k ...passed 00:07:43.548 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.548 Test: blockdev comparev and writev ...passed 00:07:43.548 Test: blockdev nvme passthru rw ...passed 00:07:43.548 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.548 Test: blockdev nvme admin passthru ...passed 00:07:43.548 Test: blockdev copy ...passed 00:07:43.548 Suite: bdevio tests on: Malloc0 00:07:43.548 Test: blockdev write read block ...passed 00:07:43.548 Test: blockdev write zeroes read block ...passed 00:07:43.548 Test: blockdev write zeroes read no split ...passed 00:07:43.548 Test: blockdev write zeroes read split ...passed 00:07:43.548 Test: blockdev write zeroes read split partial ...passed 00:07:43.548 Test: blockdev reset ...passed 00:07:43.548 Test: blockdev write read 8 blocks ...passed 00:07:43.548 Test: blockdev write read size > 128k ...passed 00:07:43.548 Test: blockdev write read invalid size ...passed 00:07:43.548 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.548 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.548 Test: blockdev write read max offset ...passed 00:07:43.548 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.548 Test: blockdev writev readv 8 blocks ...passed 00:07:43.548 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.548 Test: blockdev writev readv block ...passed 00:07:43.548 
Test: blockdev writev readv size > 128k ...passed 00:07:43.548 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.548 Test: blockdev comparev and writev ...passed 00:07:43.548 Test: blockdev nvme passthru rw ...passed 00:07:43.548 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.548 Test: blockdev nvme admin passthru ...passed 00:07:43.548 Test: blockdev copy ...passed 00:07:43.548 00:07:43.548 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.548 suites 16 16 n/a 0 0 00:07:43.548 tests 368 368 368 0 0 00:07:43.548 asserts 2224 2224 2224 0 n/a 00:07:43.548 00:07:43.548 Elapsed time = 0.453 seconds 00:07:43.806 0 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1722091 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1722091 ']' 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 1722091 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1722091 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1722091' 00:07:43.806 killing process with pid 1722091 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1722091 00:07:43.806 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1722091 00:07:44.065 10:15:08 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:44.065 00:07:44.065 real 0m1.436s 00:07:44.065 user 0m3.551s 00:07:44.065 sys 0m0.402s 00:07:44.065 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.065 10:15:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:44.065 ************************************ 00:07:44.065 END TEST bdev_bounds 00:07:44.065 ************************************ 00:07:44.065 10:15:08 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:44.065 10:15:08 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:44.065 10:15:08 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:44.065 10:15:08 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.065 10:15:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:44.065 ************************************ 00:07:44.065 START TEST bdev_nbd 00:07:44.065 ************************************ 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 
raid1 AIO0' '' 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1722375 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1722375 /var/tmp/spdk-nbd.sock 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1722375 ']' 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.065 10:15:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:44.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
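The bdev_nbd test above launches a dedicated bdev_svc application on its own RPC socket (/var/tmp/spdk-nbd.sock) and blocks until that socket is listening before it issues any nbd_start_disk calls; that is what the "Waiting for process to start up and listen on UNIX domain socket ..." message corresponds to. A minimal bash sketch of this start-and-wait pattern, assuming a local SPDK checkout under $SPDK_DIR and using a plain socket-existence poll in place of the full waitforlisten helper from common/autotest_common.sh:

#!/usr/bin/env bash
# Illustrative sketch only -- the real harness uses waitforlisten(), which also
# checks that the PID stays alive and retries the RPC layer, not just the socket file.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}      # assumption: local SPDK checkout
rpc_sock=/var/tmp/spdk-nbd.sock          # same socket as in the trace above
conf=$SPDK_DIR/test/bdev/bdev.json       # JSON bdev config passed via --json

"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 --json "$conf" &
svc_pid=$!

# Wait (up to ~20 s) for the UNIX-domain RPC socket to appear.
for ((i = 0; i < 200; i++)); do
    [[ -S $rpc_sock ]] && break
    kill -0 "$svc_pid" 2>/dev/null || { echo "bdev_svc exited early" >&2; exit 1; }
    sleep 0.1
done
[[ -S $rpc_sock ]] || { echo "timed out waiting for $rpc_sock" >&2; exit 1; }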
00:07:44.066 10:15:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.066 10:15:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:44.066 [2024-07-15 10:15:08.763429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:44.066 [2024-07-15 10:15:08.763472] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:01.4 cannot be used 
00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:44.066 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:44.324 [2024-07-15 10:15:08.854691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.324 [2024-07-15 10:15:08.928446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.324 [2024-07-15 10:15:09.064618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:44.324 [2024-07-15 10:15:09.064663] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:44.324 [2024-07-15 10:15:09.064672] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:44.324 [2024-07-15 10:15:09.072640] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:44.324 [2024-07-15 10:15:09.072658] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:44.324 [2024-07-15 10:15:09.080641] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:44.324 [2024-07-15 10:15:09.080656] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:44.582 [2024-07-15 10:15:09.148045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:44.582 [2024-07-15 10:15:09.148083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.582 [2024-07-15 10:15:09.148094] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x281dcc0 00:07:44.582 [2024-07-15 10:15:09.148102] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.582 [2024-07-15 10:15:09.149134] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.582 [2024-07-15 10:15:09.149157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@862 -- # return 0 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:44.840 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.098 1+0 records in 00:07:45.098 1+0 records out 00:07:45.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222826 s, 18.4 MB/s 00:07:45.098 10:15:09 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.098 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.357 1+0 records in 00:07:45.357 1+0 records out 00:07:45.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239038 s, 17.1 MB/s 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.357 10:15:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.615 1+0 records in 00:07:45.615 1+0 records out 00:07:45.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224991 s, 18.2 MB/s 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:07:45.615 1+0 records in 00:07:45.615 1+0 records out 00:07:45.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321999 s, 12.7 MB/s 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.615 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:07:45.872 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.873 1+0 records in 00:07:45.873 1+0 records out 00:07:45.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335849 s, 12.2 MB/s 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.873 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 
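Each bdev in the list goes through the same attach-and-verify sequence traced above: rpc.py nbd_start_disk exports the bdev and prints the allocated /dev/nbdN node, waitfornbd polls /proc/partitions until the kernel shows that node, and a single 4 KiB O_DIRECT read via dd (size-checked with stat -c %s) confirms the device is actually readable. A condensed sketch of that per-bdev loop, assuming the bdev_svc RPC socket from above is already up; the polling loop here is illustrative and not the real waitfornbd from common/autotest_common.sh:

#!/usr/bin/env bash
rpc_py="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # assumption: $SPDK_DIR is set
testfile=/tmp/nbdtest                                         # scratch file path is illustrative

bdevs=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0)                 # subset of the 16 bdevs in the log
for bdev in "${bdevs[@]}"; do
    nbd=$($rpc_py nbd_start_disk "$bdev")                     # prints the /dev/nbdN it attached

    # Poll /proc/partitions until the kernel exposes the new nbd node.
    for ((t = 0; t < 20; t++)); do
        grep -q -w "$(basename "$nbd")" /proc/partitions && break
        sleep 0.1
    done

    # Read one 4 KiB block with O_DIRECT and verify we really got 4096 bytes back.
    dd if="$nbd" of="$testfile" bs=4096 count=1 iflag=direct
    [[ $(stat -c %s "$testfile") -eq 4096 ]] || { echo "short read from $nbd" >&2; exit 1; }
    rm -f "$testfile"
done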
00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.131 1+0 records in 00:07:46.131 1+0 records out 00:07:46.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303093 s, 13.5 MB/s 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.131 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.389 10:15:10 
blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.389 1+0 records in 00:07:46.389 1+0 records out 00:07:46.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037433 s, 10.9 MB/s 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.389 10:15:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:07:46.389 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:07:46.389 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.647 1+0 records in 00:07:46.647 1+0 records out 00:07:46.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379779 s, 10.8 MB/s 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.647 10:15:11 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.647 1+0 records in 00:07:46.647 1+0 records out 00:07:46.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338712 s, 12.1 MB/s 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.647 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:07:46.905 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.906 1+0 records in 00:07:46.906 1+0 records out 00:07:46.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043136 s, 9.5 MB/s 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.906 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:07:47.163 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.164 1+0 records in 00:07:47.164 1+0 records out 00:07:47.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045647 s, 9.0 MB/s 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:47.164 10:15:11 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.164 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:07:47.422 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:07:47.422 10:15:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.422 1+0 records in 00:07:47.422 1+0 records out 00:07:47.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546384 s, 7.5 MB/s 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:07:47.422 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w 
nbd12 /proc/partitions 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.679 1+0 records in 00:07:47.679 1+0 records out 00:07:47.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456265 s, 9.0 MB/s 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.679 1+0 records in 00:07:47.679 1+0 records out 00:07:47.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404639 s, 10.1 MB/s 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.679 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.937 1+0 records in 00:07:47.937 1+0 records out 00:07:47.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534122 s, 7.7 MB/s 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.937 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:48.194 10:15:12 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.194 1+0 records in 00:07:48.194 1+0 records out 00:07:48.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549893 s, 7.4 MB/s 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:48.194 10:15:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.509 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:48.509 { 00:07:48.509 "nbd_device": "/dev/nbd0", 00:07:48.509 "bdev_name": "Malloc0" 00:07:48.509 }, 00:07:48.509 { 00:07:48.509 "nbd_device": "/dev/nbd1", 00:07:48.509 "bdev_name": "Malloc1p0" 00:07:48.509 }, 00:07:48.509 { 00:07:48.509 "nbd_device": "/dev/nbd2", 00:07:48.509 "bdev_name": "Malloc1p1" 00:07:48.509 }, 00:07:48.509 { 00:07:48.509 "nbd_device": "/dev/nbd3", 00:07:48.509 "bdev_name": "Malloc2p0" 00:07:48.509 }, 00:07:48.509 { 00:07:48.509 "nbd_device": "/dev/nbd4", 00:07:48.509 "bdev_name": "Malloc2p1" 00:07:48.509 }, 00:07:48.509 { 00:07:48.509 "nbd_device": "/dev/nbd5", 00:07:48.509 "bdev_name": "Malloc2p2" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd6", 00:07:48.510 "bdev_name": "Malloc2p3" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd7", 00:07:48.510 "bdev_name": "Malloc2p4" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd8", 00:07:48.510 "bdev_name": "Malloc2p5" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd9", 00:07:48.510 "bdev_name": "Malloc2p6" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd10", 00:07:48.510 "bdev_name": "Malloc2p7" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd11", 00:07:48.510 "bdev_name": "TestPT" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd12", 00:07:48.510 "bdev_name": "raid0" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd13", 00:07:48.510 "bdev_name": "concat0" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd14", 00:07:48.510 "bdev_name": "raid1" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 
"nbd_device": "/dev/nbd15", 00:07:48.510 "bdev_name": "AIO0" 00:07:48.510 } 00:07:48.510 ]' 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd0", 00:07:48.510 "bdev_name": "Malloc0" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd1", 00:07:48.510 "bdev_name": "Malloc1p0" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd2", 00:07:48.510 "bdev_name": "Malloc1p1" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd3", 00:07:48.510 "bdev_name": "Malloc2p0" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd4", 00:07:48.510 "bdev_name": "Malloc2p1" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd5", 00:07:48.510 "bdev_name": "Malloc2p2" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd6", 00:07:48.510 "bdev_name": "Malloc2p3" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd7", 00:07:48.510 "bdev_name": "Malloc2p4" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd8", 00:07:48.510 "bdev_name": "Malloc2p5" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd9", 00:07:48.510 "bdev_name": "Malloc2p6" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd10", 00:07:48.510 "bdev_name": "Malloc2p7" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd11", 00:07:48.510 "bdev_name": "TestPT" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd12", 00:07:48.510 "bdev_name": "raid0" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd13", 00:07:48.510 "bdev_name": "concat0" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd14", 00:07:48.510 "bdev_name": "raid1" 00:07:48.510 }, 00:07:48.510 { 00:07:48.510 "nbd_device": "/dev/nbd15", 00:07:48.510 "bdev_name": "AIO0" 00:07:48.510 } 00:07:48.510 ]' 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.510 
10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.510 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.772 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:49.030 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:49.030 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:49.030 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:49.030 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:49.031 
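
What is being traced here is the teardown half of the nbd flow: nbd_stop_disks() walks the nbd_list, asks the SPDK target over the /var/tmp/spdk-nbd.sock RPC socket to detach each device, and waitfornbd_exit() then polls /proc/partitions (up to 20 probes) until the kernel stops listing it. A minimal bash sketch of that flow, reconstructed from the xtrace above; the function names match the trace, but the bodies, the $rootdir stand-in for the SPDK checkout path, and the 0.1 s back-off are approximations rather than the verbatim bdev/nbd_common.sh source:

  nbd_stop_disks() {
    local rpc_server=$1
    local nbd_list=($2)
    local i

    for i in "${nbd_list[@]}"; do
      # sh@54: detach this device via the nbd RPC socket
      "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_stop_disk "$i"
      # sh@55: wait until /dev/nbdX is really gone
      waitfornbd_exit "$(basename "$i")"
    done
  }

  waitfornbd_exit() {
    local nbd_name=$1
    local i

    # sh@37-@41: poll until the kernel no longer lists the device
    for ((i = 1; i <= 20; i++)); do
      if grep -q -w "$nbd_name" /proc/partitions; then
        sleep 0.1   # assumed interval; not visible in the trace
      else
        break
      fi
    done
    # sh@45
    return 0
  }

Each waitfornbd_exit in this run shows a single grep immediately followed by break, consistent with every device already being gone by the first probe.
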
10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.031 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.287 10:15:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:49.545 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.803 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.061 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.319 10:15:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.319 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.578 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.837 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.096 10:15:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.355 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # 
count=0 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.613 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:51.614 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:51.872 /dev/nbd0 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:51.872 
10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.872 1+0 records in 00:07:51.872 1+0 records out 00:07:51.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149244 s, 27.4 MB/s 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:51.872 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:07:52.130 /dev/nbd1 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.130 1+0 records in 00:07:52.130 1+0 records out 00:07:52.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178505 s, 22.9 MB/s 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.130 
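
The start-side counterpart is waitfornbd() from common/autotest_common.sh: after nbd_start_disk attaches a bdev, the test first waits for the device to appear in /proc/partitions and then proves it is readable by pulling a single 4 KiB block with O_DIRECT and checking that a non-empty file came back (the 1+0 records in/out lines and size=4096 above). A sketch reconstructed from this trace; $testdir stands in for the .../spdk/test/bdev path shown in the log, and the sleep interval is an assumption:

  waitfornbd() {
    local nbd_name=$1
    local i

    # ac@869-@871: wait for the device node to show up in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
      if grep -q -w "$nbd_name" /proc/partitions; then
        break
      fi
      sleep 0.1   # assumed
    done

    # ac@882-@886: read one 4 KiB block with O_DIRECT and require it to be non-empty
    for ((i = 1; i <= 20; i++)); do
      dd if="/dev/$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$testdir/nbdtest")
      rm -f "$testdir/nbdtest"
      if [ "$size" != "0" ]; then
        break
      fi
      sleep 0.1   # assumed
    done
    # ac@887
    return 0
  }

The MB/s figures dd reports for these probes (roughly 7 to 27 MB/s across the devices below) are not throughput numbers; a single 4 KiB transfer is dominated by per-I/O latency, so the reported rate is expected to vary widely.
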
10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:07:52.130 /dev/nbd10 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.130 1+0 records in 00:07:52.130 1+0 records out 00:07:52.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280739 s, 14.6 MB/s 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.130 10:15:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:07:52.388 /dev/nbd11 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:52.388 10:15:17 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@867 -- # local i 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.388 1+0 records in 00:07:52.388 1+0 records out 00:07:52.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312162 s, 13.1 MB/s 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.388 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:07:52.645 /dev/nbd12 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.645 1+0 records in 00:07:52.645 1+0 records out 00:07:52.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363202 s, 11.3 MB/s 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.645 10:15:17 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.645 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:07:52.902 /dev/nbd13 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.902 1+0 records in 00:07:52.902 1+0 records out 00:07:52.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336751 s, 12.2 MB/s 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.902 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:07:53.159 /dev/nbd14 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@867 -- # local i 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.159 1+0 records in 00:07:53.159 1+0 records out 00:07:53.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333828 s, 12.3 MB/s 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:53.159 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:07:53.159 /dev/nbd15 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.422 1+0 records in 00:07:53.422 1+0 records out 00:07:53.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296181 s, 13.8 MB/s 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.422 10:15:17 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@884 -- # size=4096 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:53.422 10:15:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:07:53.422 /dev/nbd2 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.422 1+0 records in 00:07:53.422 1+0 records out 00:07:53.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378969 s, 10.8 MB/s 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:53.422 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:07:53.679 /dev/nbd3 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 
00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.679 1+0 records in 00:07:53.679 1+0 records out 00:07:53.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002595 s, 15.8 MB/s 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:53.679 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:07:53.937 /dev/nbd4 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.937 1+0 records in 00:07:53.937 1+0 records out 00:07:53.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408125 s, 10.0 MB/s 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 
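
Note the pairing order in this round: the nbd_list handed to nbd_start_disks (and to nbd_rpc_data_verify at blockdev.sh@323 above) is sorted lexically, /dev/nbd0, /dev/nbd1, /dev/nbd10, ..., /dev/nbd9, and devices are attached by index, which is why Malloc1p1 lands on /dev/nbd10 and AIO0 on /dev/nbd9 here even though the earlier teardown used the plain numeric order nbd0 through nbd15. A sketch of the pairing loop inferred from the sh@14-@17 entries; an approximation, not the verbatim nbd_common.sh code:

  nbd_start_disks() {
    local rpc_server=$1
    local bdev_list=($2)
    local nbd_list=($3)
    local i

    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
      # sh@15: attach bdev_list[i] to nbd_list[i] over the RPC socket
      "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_start_disk \
        "${bdev_list[$i]}" "${nbd_list[$i]}"
      # sh@17: block until the kernel device is actually usable
      waitfornbd "$(basename "${nbd_list[$i]}")"
    done
  }
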
00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:53.937 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:07:54.195 /dev/nbd5 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.195 1+0 records in 00:07:54.195 1+0 records out 00:07:54.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511813 s, 8.0 MB/s 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:54.195 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:07:54.195 /dev/nbd6 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.453 10:15:18 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.453 1+0 records in 00:07:54.453 1+0 records out 00:07:54.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464119 s, 8.8 MB/s 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:07:54.453 /dev/nbd7 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.453 1+0 records in 00:07:54.453 1+0 records out 00:07:54.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544343 s, 7.5 MB/s 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.453 10:15:19 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:07:54.711 /dev/nbd8 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.711 1+0 records in 00:07:54.711 1+0 records out 00:07:54.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396526 s, 10.3 MB/s 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:54.711 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:07:54.969 /dev/nbd9 00:07:54.969 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 
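
For orientation, this whole stretch of the log is one nbd_rpc_data_verify() round driven from blockdev.sh@323: attach all sixteen bdevs, confirm the target reports sixteen nbd devices, then exercise them with data. A very rough outline reconstructed from the sh@9x and sh@100 markers traced here and just below; the body is an approximation, not the verbatim nbd_common.sh source, and the error handling is guessed:

  nbd_rpc_data_verify() {
    local rpc_server=$1
    local bdev_list=($2)
    local nbd_list=($3)
    local count

    nbd_start_disks "$rpc_server" "$2" "$3"          # sh@94: attach every bdev to an nbd device
    count=$(nbd_get_count "$rpc_server")             # sh@95: ask the target what it exports now
    if [ "$count" -ne "${#nbd_list[@]}" ]; then      # sh@96: expect all 16 to be present
      return 1                                       # assumed; the failure path is not exercised here
    fi
    nbd_dd_data_verify "$3" write                    # sh@100: fill each device with random data
    # (read-back verification and the nbd_stop_disks teardown follow later in the log)
  }
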
00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.970 1+0 records in 00:07:54.970 1+0 records out 00:07:54.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051713 s, 7.9 MB/s 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.970 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.232 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd0", 00:07:55.233 "bdev_name": "Malloc0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd1", 00:07:55.233 "bdev_name": "Malloc1p0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd10", 00:07:55.233 "bdev_name": "Malloc1p1" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd11", 00:07:55.233 "bdev_name": "Malloc2p0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd12", 00:07:55.233 "bdev_name": "Malloc2p1" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd13", 00:07:55.233 "bdev_name": "Malloc2p2" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd14", 00:07:55.233 "bdev_name": "Malloc2p3" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd15", 00:07:55.233 "bdev_name": "Malloc2p4" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd2", 00:07:55.233 "bdev_name": "Malloc2p5" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd3", 00:07:55.233 "bdev_name": "Malloc2p6" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd4", 00:07:55.233 "bdev_name": "Malloc2p7" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd5", 00:07:55.233 "bdev_name": "TestPT" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd6", 00:07:55.233 "bdev_name": "raid0" 
00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd7", 00:07:55.233 "bdev_name": "concat0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd8", 00:07:55.233 "bdev_name": "raid1" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd9", 00:07:55.233 "bdev_name": "AIO0" 00:07:55.233 } 00:07:55.233 ]' 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd0", 00:07:55.233 "bdev_name": "Malloc0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd1", 00:07:55.233 "bdev_name": "Malloc1p0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd10", 00:07:55.233 "bdev_name": "Malloc1p1" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd11", 00:07:55.233 "bdev_name": "Malloc2p0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd12", 00:07:55.233 "bdev_name": "Malloc2p1" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd13", 00:07:55.233 "bdev_name": "Malloc2p2" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd14", 00:07:55.233 "bdev_name": "Malloc2p3" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd15", 00:07:55.233 "bdev_name": "Malloc2p4" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd2", 00:07:55.233 "bdev_name": "Malloc2p5" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd3", 00:07:55.233 "bdev_name": "Malloc2p6" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd4", 00:07:55.233 "bdev_name": "Malloc2p7" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd5", 00:07:55.233 "bdev_name": "TestPT" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd6", 00:07:55.233 "bdev_name": "raid0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd7", 00:07:55.233 "bdev_name": "concat0" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd8", 00:07:55.233 "bdev_name": "raid1" 00:07:55.233 }, 00:07:55.233 { 00:07:55.233 "nbd_device": "/dev/nbd9", 00:07:55.233 "bdev_name": "AIO0" 00:07:55.233 } 00:07:55.233 ]' 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:55.233 /dev/nbd1 00:07:55.233 /dev/nbd10 00:07:55.233 /dev/nbd11 00:07:55.233 /dev/nbd12 00:07:55.233 /dev/nbd13 00:07:55.233 /dev/nbd14 00:07:55.233 /dev/nbd15 00:07:55.233 /dev/nbd2 00:07:55.233 /dev/nbd3 00:07:55.233 /dev/nbd4 00:07:55.233 /dev/nbd5 00:07:55.233 /dev/nbd6 00:07:55.233 /dev/nbd7 00:07:55.233 /dev/nbd8 00:07:55.233 /dev/nbd9' 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:55.233 /dev/nbd1 00:07:55.233 /dev/nbd10 00:07:55.233 /dev/nbd11 00:07:55.233 /dev/nbd12 00:07:55.233 /dev/nbd13 00:07:55.233 /dev/nbd14 00:07:55.233 /dev/nbd15 00:07:55.233 /dev/nbd2 00:07:55.233 /dev/nbd3 00:07:55.233 /dev/nbd4 00:07:55.233 /dev/nbd5 00:07:55.233 /dev/nbd6 00:07:55.233 /dev/nbd7 00:07:55.233 /dev/nbd8 00:07:55.233 /dev/nbd9' 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:07:55.233 
10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:55.233 256+0 records in 00:07:55.233 256+0 records out 00:07:55.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114326 s, 91.7 MB/s 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.233 10:15:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:55.492 256+0 records in 00:07:55.492 256+0 records out 00:07:55.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113839 s, 9.2 MB/s 00:07:55.492 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.492 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:55.492 256+0 records in 00:07:55.492 256+0 records out 00:07:55.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118356 s, 8.9 MB/s 00:07:55.492 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.492 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:55.492 256+0 records in 00:07:55.492 256+0 records out 00:07:55.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119487 s, 8.8 MB/s 00:07:55.492 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.492 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:55.752 256+0 records in 00:07:55.752 256+0 records out 00:07:55.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118084 s, 8.9 MB/s 00:07:55.752 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.752 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:55.752 256+0 records in 00:07:55.752 256+0 records out 00:07:55.752 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117762 s, 8.9 MB/s 00:07:55.752 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.752 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:56.010 256+0 records in 00:07:56.010 256+0 records out 00:07:56.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11502 s, 9.1 MB/s 00:07:56.010 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.010 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:56.010 256+0 records in 00:07:56.010 256+0 records out 00:07:56.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114788 s, 9.1 MB/s 00:07:56.010 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.010 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:07:56.269 256+0 records in 00:07:56.269 256+0 records out 00:07:56.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115181 s, 9.1 MB/s 00:07:56.269 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.269 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:07:56.269 256+0 records in 00:07:56.269 256+0 records out 00:07:56.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117585 s, 8.9 MB/s 00:07:56.269 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.269 10:15:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:07:56.528 256+0 records in 00:07:56.528 256+0 records out 00:07:56.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117752 s, 8.9 MB/s 00:07:56.528 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.528 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:07:56.528 256+0 records in 00:07:56.528 256+0 records out 00:07:56.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116581 s, 9.0 MB/s 00:07:56.528 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.528 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:07:56.787 256+0 records in 00:07:56.787 256+0 records out 00:07:56.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114617 s, 9.1 MB/s 00:07:56.787 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.787 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:07:56.787 256+0 records in 00:07:56.787 256+0 records out 00:07:56.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114927 s, 9.1 
MB/s 00:07:56.787 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.787 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:07:57.046 256+0 records in 00:07:57.046 256+0 records out 00:07:57.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118121 s, 8.9 MB/s 00:07:57.046 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:57.046 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:07:57.046 256+0 records in 00:07:57.046 256+0 records out 00:07:57.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117748 s, 8.9 MB/s 00:07:57.046 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:57.046 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:07:57.305 256+0 records in 00:07:57.305 256+0 records out 00:07:57.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113378 s, 9.2 MB/s 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:57.305 
10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.305 10:15:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.565 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( 
i = 1 )) 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.824 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.083 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.342 10:15:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # 
return 0 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.601 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.859 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd3 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.117 10:15:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.376 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.635 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:59.894 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:59.894 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:59.894 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:59.894 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.894 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:07:59.894 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.895 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.154 10:15:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.413 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:00.671 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:00.672 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:00.672 malloc_lvol_verify 00:08:01.006 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:01.006 b77f311d-b000-4358-bd96-27ee46c79e6c 00:08:01.006 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:01.263 0c63922d-2c29-4a35-9b52-52cf9d542b1f 00:08:01.263 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:01.263 /dev/nbd0 00:08:01.263 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:01.263 mke2fs 1.46.5 (30-Dec-2021) 00:08:01.263 Discarding device blocks: 0/4096 done 00:08:01.263 Creating filesystem with 4096 1k blocks and 1024 inodes 
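This nbd_with_lvol_verify step stacks a logical volume on a fresh malloc bdev, exports it over NBD and checks that an ext4 filesystem can be created on it end to end (the mkfs.ext4 progress output continues just below). Stripped of the xtrace noise, the RPC sequence is roughly the following sketch; error handling and the UUIDs returned by the create calls are omitted:

    WS=/var/jenkins/workspace/crypto-phy-autotest
    RPC="$WS/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume in that store
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # must complete without error
    $RPC nbd_stop_disk /dev/nbd0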
00:08:01.263 00:08:01.263 Allocating group tables: 0/1 done 00:08:01.263 Writing inode tables: 0/1 done 00:08:01.263 Creating journal (1024 blocks): done 00:08:01.264 Writing superblocks and filesystem accounting information: 0/1 done 00:08:01.264 00:08:01.264 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:01.264 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:01.264 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.264 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:01.264 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.264 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:01.264 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.264 10:15:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1722375 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1722375 ']' 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1722375 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1722375 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1722375' 00:08:01.522 killing process with pid 1722375 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1722375 00:08:01.522 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 1722375 00:08:02.089 10:15:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:02.089 00:08:02.089 real 0m17.922s 00:08:02.089 user 0m21.441s 00:08:02.089 sys 0m10.571s 00:08:02.089 10:15:26 
blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.089 10:15:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:02.089 ************************************ 00:08:02.089 END TEST bdev_nbd 00:08:02.089 ************************************ 00:08:02.089 10:15:26 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:02.089 10:15:26 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:02.089 10:15:26 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:08:02.089 10:15:26 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:08:02.089 10:15:26 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:08:02.089 10:15:26 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:02.089 10:15:26 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.089 10:15:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:02.089 ************************************ 00:08:02.089 START TEST bdev_fio 00:08:02.089 ************************************ 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:02.089 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:08:02.089 
10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 
-- # echo filename=Malloc2p6 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:08:02.089 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:08:02.090 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:08:02.090 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:08:02.090 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:08:02.090 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:08:02.090 10:15:26 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:02.090 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:02.090 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.090 10:15:26 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:02.090 ************************************ 00:08:02.090 START TEST bdev_fio_rw_verify 00:08:02.090 ************************************ 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:08:02.090 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:02.347 10:15:26 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:02.605 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:08:02.605 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.605 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:02.606 fio-3.35 00:08:02.606 Starting 16 threads 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 
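Each job line in the fio banner above corresponds to one [job_*] section that the loop traced earlier appended to bdev.fio, binding one fio job to one bdev by name (the qat_pci_device_allocate()/EAL device-probe messages continue below). Reconstructed, the appended part of the job file has roughly this shape; the [global] defaults come from the existing bdev.fio template and the command line and are not visible in this excerpt:

    WS=/var/jenkins/workspace/crypto-phy-autotest
    cat >> "$WS/spdk/test/bdev/bdev.fio" <<'EOF'
    [job_Malloc0]
    filename=Malloc0
    [job_Malloc1p0]
    filename=Malloc1p0
    # ...one section per bdev, through Malloc2p7, TestPT, raid0, concat0, raid1...
    [job_AIO0]
    filename=AIO0
    EOF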
00:08:02.606 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:02.606 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:02.606 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:14.798 00:08:14.798 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=1726304: Mon Jul 15 10:15:37 2024 00:08:14.799 read: IOPS=113k, BW=442MiB/s (463MB/s)(4419MiB/10001msec) 00:08:14.799 slat (nsec): min=1851, max=214558, avg=27764.76, stdev=12267.49 00:08:14.799 clat (usec): min=8, max=1294, avg=241.68, stdev=117.93 00:08:14.799 lat (usec): min=17, max=1409, avg=269.45, stdev=123.86 00:08:14.799 clat percentiles (usec): 00:08:14.799 | 50.000th=[ 231], 99.000th=[ 537], 99.900th=[ 627], 99.990th=[ 848], 00:08:14.799 | 
99.999th=[ 1172] 00:08:14.799 write: IOPS=180k, BW=703MiB/s (737MB/s)(6931MiB/9865msec); 0 zone resets 00:08:14.799 slat (usec): min=3, max=3432, avg=37.16, stdev=12.89 00:08:14.799 clat (usec): min=9, max=3754, avg=276.69, stdev=130.78 00:08:14.799 lat (usec): min=28, max=3776, avg=313.85, stdev=137.26 00:08:14.799 clat percentiles (usec): 00:08:14.799 | 50.000th=[ 265], 99.000th=[ 644], 99.900th=[ 857], 99.990th=[ 996], 00:08:14.799 | 99.999th=[ 1631] 00:08:14.799 bw ( KiB/s): min=588504, max=968931, per=99.13%, avg=713164.37, stdev=6576.80, samples=304 00:08:14.799 iops : min=147126, max=242230, avg=178290.89, stdev=1644.17, samples=304 00:08:14.799 lat (usec) : 10=0.01%, 20=0.04%, 50=1.10%, 100=7.26%, 250=41.89% 00:08:14.799 lat (usec) : 500=45.96%, 750=3.46%, 1000=0.28% 00:08:14.799 lat (msec) : 2=0.01%, 4=0.01% 00:08:14.799 cpu : usr=99.29%, sys=0.36%, ctx=680, majf=0, minf=2484 00:08:14.799 IO depths : 1=12.4%, 2=24.8%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:14.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:14.799 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:14.799 issued rwts: total=1131291,1774252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:14.799 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:14.799 00:08:14.799 Run status group 0 (all jobs): 00:08:14.799 READ: bw=442MiB/s (463MB/s), 442MiB/s-442MiB/s (463MB/s-463MB/s), io=4419MiB (4634MB), run=10001-10001msec 00:08:14.799 WRITE: bw=703MiB/s (737MB/s), 703MiB/s-703MiB/s (737MB/s-737MB/s), io=6931MiB (7267MB), run=9865-9865msec 00:08:14.799 00:08:14.799 real 0m11.516s 00:08:14.799 user 2m50.746s 00:08:14.799 sys 0m1.487s 00:08:14.799 10:15:38 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.799 10:15:38 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:08:14.799 ************************************ 00:08:14.799 END TEST bdev_fio_rw_verify 00:08:14.799 ************************************ 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 
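The rw_verify pass above is driven by the fio_plugin helper in common/autotest_common.sh: it runs ldd against the spdk_bdev fio plugin, greps for a linked sanitizer runtime (libasan or libclang_rt.asan), and preloads whatever it finds ahead of the plugin before handing the job file to fio with the spdk_bdev ioengine. A minimal sketch of that preload pattern follows; the paths, output directory, and job file are illustrative placeholders, and the real invocation also passes --spdk_mem=0 and --verify_state_save=0 as the trace shows.

  #!/usr/bin/env bash
  # Sketch of the sanitizer-preload pattern exercised by bdev_fio_rw_verify above.
  # All paths here are illustrative, not the fixed autotest workspace paths.
  set -e
  plugin=./build/fio/spdk_bdev        # SPDK fio ioengine plugin
  job_file=./test/bdev/bdev.fio       # generated fio job description
  json_conf=./test/bdev/bdev.json     # bdev configuration loaded by the plugin

  # If the plugin links an ASAN runtime, that runtime must be preloaded before
  # the plugin itself, otherwise fio fails when it dlopen()s the ioengine.
  asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
      lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}' | head -n1)
      if [[ -n "$lib" ]]; then
          asan_lib=$lib
          break
      fi
  done

  LD_PRELOAD="$asan_lib $plugin" fio --ioengine=spdk_bdev --iodepth=8 --bs=4k \
      --runtime=10 --spdk_json_conf="$json_conf" --aux-path=./output "$job_file"

The same detection runs again before the trim pass, as the second fio_plugin trace further down shows; here both greps come back empty, so LD_PRELOAD carries only the plugin.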
00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:08:14.799 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:14.800 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ddd6504a-87fc-47aa-afd4-b9545d6edec4"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ddd6504a-87fc-47aa-afd4-b9545d6edec4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "32620a6e-f405-586b-ac66-ad5b847482ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "32620a6e-f405-586b-ac66-ad5b847482ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a638cb23-cd6f-55d2-a0c5-188394f36c8d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a638cb23-cd6f-55d2-a0c5-188394f36c8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": 
false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bc997c5f-d494-5035-b4d8-985641695eae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc997c5f-d494-5035-b4d8-985641695eae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cb87c7dd-b7cd-55f7-82c2-fa887bcbfa96"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cb87c7dd-b7cd-55f7-82c2-fa887bcbfa96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "24d592a6-f6a0-55b8-ba9d-99043da6e2f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "24d592a6-f6a0-55b8-ba9d-99043da6e2f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ae53d14d-8b51-5a44-926e-d2d6d440e1be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae53d14d-8b51-5a44-926e-d2d6d440e1be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "fac7d6b0-7ec7-5ce4-8314-d07b94dd3e31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fac7d6b0-7ec7-5ce4-8314-d07b94dd3e31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "7de92a74-e1fd-5c97-add6-2e1f2e8329ac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7de92a74-e1fd-5c97-add6-2e1f2e8329ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e34bfb11-0013-58d1-8274-f39796870cf9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e34bfb11-0013-58d1-8274-f39796870cf9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' 
"name": "Malloc2p7",' ' "aliases": [' ' "3bf904c4-28e1-5cef-aef5-ba9607b5fa9b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3bf904c4-28e1-5cef-aef5-ba9607b5fa9b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "99d9178b-a8af-5939-9239-1ab5bff29dc2"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "99d9178b-a8af-5939-9239-1ab5bff29dc2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f993a620-5c76-42b3-83a3-6352c43f0d6d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f993a620-5c76-42b3-83a3-6352c43f0d6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f993a620-5c76-42b3-83a3-6352c43f0d6d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' 
"num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "f394658b-651e-43ba-b2c1-a0c22ec38d66",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9459c162-285d-4996-9652-7d04c190b8cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "687c9fe0-5942-4f4b-9a57-331031b2bc77"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "687c9fe0-5942-4f4b-9a57-331031b2bc77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "687c9fe0-5942-4f4b-9a57-331031b2bc77",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "64b143c0-f9a5-44bd-8e69-3e4a834a6827",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "3a87068c-7fe3-4210-bde9-1e76ab355f26",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "13054c09-cc8c-42c7-9f60-30aef3cc41b6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "13054c09-cc8c-42c7-9f60-30aef3cc41b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "13054c09-cc8c-42c7-9f60-30aef3cc41b6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' 
"superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b6ea9623-d892-4268-8caa-f9e9a6e6fd78",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a5186dfe-c67c-417c-8fde-4054a9622849",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0bf72cda-f0a7-4fed-9447-e546ac5e82ea"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0bf72cda-f0a7-4fed-9447-e546ac5e82ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:14.801 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:08:14.801 Malloc1p0 00:08:14.801 Malloc1p1 00:08:14.801 Malloc2p0 00:08:14.801 Malloc2p1 00:08:14.801 Malloc2p2 00:08:14.801 Malloc2p3 00:08:14.801 Malloc2p4 00:08:14.801 Malloc2p5 00:08:14.801 Malloc2p6 00:08:14.801 Malloc2p7 00:08:14.801 TestPT 00:08:14.801 raid0 00:08:14.801 concat0 ]] 00:08:14.801 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "ddd6504a-87fc-47aa-afd4-b9545d6edec4"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ddd6504a-87fc-47aa-afd4-b9545d6edec4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "32620a6e-f405-586b-ac66-ad5b847482ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "32620a6e-f405-586b-ac66-ad5b847482ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a638cb23-cd6f-55d2-a0c5-188394f36c8d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a638cb23-cd6f-55d2-a0c5-188394f36c8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bc997c5f-d494-5035-b4d8-985641695eae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bc997c5f-d494-5035-b4d8-985641695eae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "cb87c7dd-b7cd-55f7-82c2-fa887bcbfa96"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cb87c7dd-b7cd-55f7-82c2-fa887bcbfa96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' 
' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "24d592a6-f6a0-55b8-ba9d-99043da6e2f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "24d592a6-f6a0-55b8-ba9d-99043da6e2f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ae53d14d-8b51-5a44-926e-d2d6d440e1be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae53d14d-8b51-5a44-926e-d2d6d440e1be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "fac7d6b0-7ec7-5ce4-8314-d07b94dd3e31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fac7d6b0-7ec7-5ce4-8314-d07b94dd3e31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "7de92a74-e1fd-5c97-add6-2e1f2e8329ac"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7de92a74-e1fd-5c97-add6-2e1f2e8329ac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e34bfb11-0013-58d1-8274-f39796870cf9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e34bfb11-0013-58d1-8274-f39796870cf9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "3bf904c4-28e1-5cef-aef5-ba9607b5fa9b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3bf904c4-28e1-5cef-aef5-ba9607b5fa9b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "99d9178b-a8af-5939-9239-1ab5bff29dc2"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "99d9178b-a8af-5939-9239-1ab5bff29dc2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "f993a620-5c76-42b3-83a3-6352c43f0d6d"' ' ],' ' 
"product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f993a620-5c76-42b3-83a3-6352c43f0d6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "f993a620-5c76-42b3-83a3-6352c43f0d6d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "f394658b-651e-43ba-b2c1-a0c22ec38d66",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "9459c162-285d-4996-9652-7d04c190b8cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "687c9fe0-5942-4f4b-9a57-331031b2bc77"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "687c9fe0-5942-4f4b-9a57-331031b2bc77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "687c9fe0-5942-4f4b-9a57-331031b2bc77",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "64b143c0-f9a5-44bd-8e69-3e4a834a6827",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "3a87068c-7fe3-4210-bde9-1e76ab355f26",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' 
"name": "raid1",' ' "aliases": [' ' "13054c09-cc8c-42c7-9f60-30aef3cc41b6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "13054c09-cc8c-42c7-9f60-30aef3cc41b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "13054c09-cc8c-42c7-9f60-30aef3cc41b6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b6ea9623-d892-4268-8caa-f9e9a6e6fd78",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "a5186dfe-c67c-417c-8fde-4054a9622849",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0bf72cda-f0a7-4fed-9447-e546ac5e82ea"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0bf72cda-f0a7-4fed-9447-e546ac5e82ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:08:14.802 
10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:08:14.802 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.803 10:15:38 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:14.803 ************************************ 00:08:14.803 START TEST bdev_fio_trim 00:08:14.803 ************************************ 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:08:14.803 
10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:14.803 10:15:38 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:14.803 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.803 fio-3.35 00:08:14.803 Starting 14 threads 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.803 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:14.803 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 
0000:3f:01.4 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:14.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:14.804 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:27.008 00:08:27.008 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=1728469: Mon Jul 15 10:15:49 2024 00:08:27.008 write: IOPS=156k, BW=611MiB/s (641MB/s)(6113MiB/10001msec); 0 zone resets 00:08:27.008 slat (nsec): min=1859, max=3708.2k, avg=30556.91, stdev=10159.35 00:08:27.008 clat (usec): min=18, max=1746, avg=229.08, stdev=84.05 00:08:27.008 lat (usec): min=23, max=4261, avg=259.64, stdev=88.25 00:08:27.008 clat percentiles (usec): 00:08:27.008 | 50.000th=[ 219], 99.000th=[ 445], 99.900th=[ 545], 99.990th=[ 725], 00:08:27.008 | 99.999th=[ 979] 00:08:27.008 bw ( KiB/s): min=529280, max=927395, per=100.00%, avg=628574.47, stdev=7457.14, samples=266 00:08:27.008 iops : min=132320, max=231846, avg=157143.47, stdev=1864.26, samples=266 00:08:27.008 trim: IOPS=156k, BW=611MiB/s (641MB/s)(6113MiB/10001msec); 0 zone resets 00:08:27.008 slat (usec): min=3, max=438, avg=21.39, stdev= 6.75 00:08:27.008 clat (usec): min=3, max=4261, avg=253.79, stdev=92.39 00:08:27.008 lat (usec): min=8, max=4296, avg=275.19, stdev=96.01 00:08:27.008 clat percentiles (usec): 00:08:27.008 | 50.000th=[ 245], 99.000th=[ 478], 99.900th=[ 586], 99.990th=[ 750], 00:08:27.008 | 99.999th=[ 1004] 00:08:27.008 bw ( KiB/s): min=529280, max=927395, per=100.00%, avg=628574.89, stdev=7457.20, samples=266 00:08:27.008 iops : min=132320, max=231846, avg=157143.58, stdev=1864.27, samples=266 00:08:27.008 lat (usec) : 4=0.01%, 10=0.02%, 20=0.07%, 50=0.30%, 100=2.96% 00:08:27.008 lat (usec) : 250=54.00%, 500=42.19%, 750=0.44%, 1000=0.01% 00:08:27.008 lat (msec) : 2=0.01%, 10=0.01% 00:08:27.008 cpu : usr=99.65%, sys=0.00%, ctx=533, majf=0, minf=701 00:08:27.008 IO depths : 1=12.4%, 2=24.9%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:27.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.008 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.008 issued rwts: total=0,1564985,1564988,0 short=0,0,0,0 
dropped=0,0,0,0 00:08:27.008 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:27.008 00:08:27.008 Run status group 0 (all jobs): 00:08:27.008 WRITE: bw=611MiB/s (641MB/s), 611MiB/s-611MiB/s (641MB/s-641MB/s), io=6113MiB (6410MB), run=10001-10001msec 00:08:27.008 TRIM: bw=611MiB/s (641MB/s), 611MiB/s-611MiB/s (641MB/s-641MB/s), io=6113MiB (6410MB), run=10001-10001msec 00:08:27.008 00:08:27.008 real 0m11.455s 00:08:27.008 user 2m31.036s 00:08:27.008 sys 0m0.598s 00:08:27.008 10:15:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.008 10:15:49 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:08:27.008 ************************************ 00:08:27.008 END TEST bdev_fio_trim 00:08:27.008 ************************************ 00:08:27.008 10:15:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:08:27.008 10:15:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:08:27.008 10:15:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:27.008 10:15:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:08:27.008 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:27.008 10:15:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:08:27.008 00:08:27.008 real 0m23.310s 00:08:27.008 user 5m21.974s 00:08:27.008 sys 0m2.265s 00:08:27.008 10:15:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.008 10:15:50 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:08:27.008 ************************************ 00:08:27.008 END TEST bdev_fio 00:08:27.008 ************************************ 00:08:27.008 10:15:50 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:27.008 10:15:50 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:27.008 10:15:50 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:27.008 10:15:50 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:27.008 10:15:50 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.008 10:15:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:27.008 ************************************ 00:08:27.008 START TEST bdev_verify 00:08:27.008 ************************************ 00:08:27.008 10:15:50 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:27.008 [2024-07-15 10:15:50.163720] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:27.008 [2024-07-15 10:15:50.163764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730392 ] 00:08:27.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.008 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:27.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.008 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:27.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:27.009 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:27.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.009 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:27.009 [2024-07-15 10:15:50.255126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.009 [2024-07-15 10:15:50.330450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.009 [2024-07-15 10:15:50.330453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.009 [2024-07-15 10:15:50.465062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:27.009 [2024-07-15 10:15:50.465114] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:27.009 [2024-07-15 10:15:50.465125] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:27.009 [2024-07-15 10:15:50.473059] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:27.009 [2024-07-15 10:15:50.473081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:27.009 [2024-07-15 10:15:50.481075] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:27.009 [2024-07-15 10:15:50.481093] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:27.009 [2024-07-15 10:15:50.548987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:27.009 [2024-07-15 10:15:50.549031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.009 [2024-07-15 10:15:50.549048] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25da0c0 00:08:27.009 [2024-07-15 10:15:50.549058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.009 [2024-07-15 10:15:50.550043] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.009 [2024-07-15 10:15:50.550067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:27.009 Running I/O for 5 seconds... 
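The verify pass that has just started is driven entirely by the bdevperf example application and the bdev table in bdev.json, as shown in the run_test command above. The sketch below shows how the same invocation could be reproduced by hand: the paths and flag values are copied from the logged command, the flag descriptions are our reading of bdevperf's options, and -C plus the empty trailing argument are passed through exactly as the test script passes them.

# Sketch: reproduce the bdev_verify run outside the test harness.
# Needs hugepages configured and typically root, as in the CI run.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk

# --json  : bdev configuration to load (the Malloc bdevs, their partitions,
#           raid0, concat0, TestPT and AIO0 listed in the results below)
# -q 128  : I/O queue depth per job
# -o 4096 : I/O size in bytes
# -w verify : write a data pattern and read it back for comparison
# -t 5    : run time in seconds
# -m 0x3  : core mask, i.e. reactors on cores 0 and 1 as seen in the log
"$SPDK/build/examples/bdevperf" \
  --json "$SPDK/test/bdev/bdev.json" \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''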
00:08:31.201 00:08:31.201 Latency(us) 00:08:31.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.201 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x1000 00:08:31.201 Malloc0 : 5.12 1673.46 6.54 0.00 0.00 76367.03 416.15 283534.95 00:08:31.201 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x1000 length 0x1000 00:08:31.201 Malloc0 : 5.12 1651.14 6.45 0.00 0.00 77396.34 353.89 315411.66 00:08:31.201 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x800 00:08:31.201 Malloc1p0 : 5.16 867.93 3.39 0.00 0.00 146873.85 2569.01 156028.11 00:08:31.201 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x800 length 0x800 00:08:31.201 Malloc1p0 : 5.15 869.57 3.40 0.00 0.00 146618.39 2569.01 156866.97 00:08:31.201 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x800 00:08:31.201 Malloc1p1 : 5.16 867.59 3.39 0.00 0.00 146648.57 2582.12 155189.25 00:08:31.201 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x800 length 0x800 00:08:31.201 Malloc1p1 : 5.15 869.33 3.40 0.00 0.00 146352.60 2555.90 155189.25 00:08:31.201 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x200 00:08:31.201 Malloc2p0 : 5.17 867.26 3.39 0.00 0.00 146418.03 2555.90 153511.53 00:08:31.201 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x200 length 0x200 00:08:31.201 Malloc2p0 : 5.15 869.07 3.39 0.00 0.00 146100.41 2555.90 153511.53 00:08:31.201 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x200 00:08:31.201 Malloc2p1 : 5.17 866.96 3.39 0.00 0.00 146197.29 2595.23 152672.67 00:08:31.201 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x200 length 0x200 00:08:31.201 Malloc2p1 : 5.16 868.81 3.39 0.00 0.00 145875.52 2569.01 152672.67 00:08:31.201 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x200 00:08:31.201 Malloc2p2 : 5.17 866.58 3.39 0.00 0.00 145983.50 2503.48 150994.94 00:08:31.201 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x200 length 0x200 00:08:31.201 Malloc2p2 : 5.16 868.55 3.39 0.00 0.00 145636.83 2542.80 150994.94 00:08:31.201 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x200 00:08:31.201 Malloc2p3 : 5.17 866.14 3.38 0.00 0.00 145761.30 2582.12 149317.22 00:08:31.201 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x200 length 0x200 00:08:31.201 Malloc2p3 : 5.16 868.28 3.39 0.00 0.00 145387.21 2569.01 149317.22 00:08:31.201 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x200 00:08:31.201 Malloc2p4 : 5.17 865.74 3.38 0.00 0.00 145546.26 2516.58 
145961.78 00:08:31.201 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x200 length 0x200 00:08:31.201 Malloc2p4 : 5.16 868.00 3.39 0.00 0.00 145145.23 2516.58 145961.78 00:08:31.201 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x200 00:08:31.201 Malloc2p5 : 5.18 865.51 3.38 0.00 0.00 145296.80 2503.48 143445.20 00:08:31.201 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x200 length 0x200 00:08:31.201 Malloc2p5 : 5.16 867.66 3.39 0.00 0.00 144918.11 2503.48 144284.06 00:08:31.201 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x200 00:08:31.201 Malloc2p6 : 5.18 865.28 3.38 0.00 0.00 145044.01 2660.76 141767.48 00:08:31.201 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x200 length 0x200 00:08:31.201 Malloc2p6 : 5.17 867.34 3.39 0.00 0.00 144680.52 2660.76 141767.48 00:08:31.201 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x200 00:08:31.201 Malloc2p7 : 5.18 865.04 3.38 0.00 0.00 144796.65 2569.01 140928.61 00:08:31.201 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x200 length 0x200 00:08:31.201 Malloc2p7 : 5.17 867.03 3.39 0.00 0.00 144441.13 2529.69 140928.61 00:08:31.201 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x1000 00:08:31.201 TestPT : 5.19 863.01 3.37 0.00 0.00 144785.34 10013.90 140928.61 00:08:31.201 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x1000 length 0x1000 00:08:31.201 TestPT : 5.18 842.36 3.29 0.00 0.00 147991.29 12949.91 187904.82 00:08:31.201 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x2000 00:08:31.201 raid0 : 5.18 864.54 3.38 0.00 0.00 144140.29 2660.76 125829.12 00:08:31.201 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x2000 length 0x2000 00:08:31.201 raid0 : 5.17 866.54 3.38 0.00 0.00 143792.09 2673.87 120795.96 00:08:31.201 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x2000 00:08:31.201 concat0 : 5.19 864.01 3.38 0.00 0.00 143936.54 2739.40 120795.96 00:08:31.201 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x2000 length 0x2000 00:08:31.201 concat0 : 5.17 866.10 3.38 0.00 0.00 143576.90 2739.40 117440.51 00:08:31.201 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 length 0x1000 00:08:31.201 raid1 : 5.19 863.73 3.37 0.00 0.00 143659.89 3198.16 114085.07 00:08:31.201 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x1000 length 0x1000 00:08:31.201 raid1 : 5.19 888.15 3.47 0.00 0.00 139709.65 1782.58 109890.76 00:08:31.201 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x0 
length 0x4e2 00:08:31.201 AIO0 : 5.19 887.34 3.47 0.00 0.00 139522.51 475.14 112407.35 00:08:31.201 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.201 Verification LBA range: start 0x4e2 length 0x4e2 00:08:31.201 AIO0 : 5.19 887.97 3.47 0.00 0.00 139405.46 1317.27 111568.49 00:08:31.201 =================================================================================================================== 00:08:31.201 Total : 29366.03 114.71 0.00 0.00 137168.75 353.89 315411.66 00:08:31.770 00:08:31.770 real 0m6.175s 00:08:31.770 user 0m11.622s 00:08:31.770 sys 0m0.318s 00:08:31.770 10:15:56 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.770 10:15:56 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:31.770 ************************************ 00:08:31.770 END TEST bdev_verify 00:08:31.770 ************************************ 00:08:31.770 10:15:56 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:31.770 10:15:56 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:31.770 10:15:56 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:31.770 10:15:56 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.770 10:15:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:31.770 ************************************ 00:08:31.770 START TEST bdev_verify_big_io 00:08:31.770 ************************************ 00:08:31.771 10:15:56 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:31.771 [2024-07-15 10:15:56.413554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:31.771 [2024-07-15 10:15:56.413595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731462 ] 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:31.771 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:31.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:31.771 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:31.771 [2024-07-15 10:15:56.502709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:32.031 [2024-07-15 10:15:56.574452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.031 [2024-07-15 10:15:56.574455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.031 [2024-07-15 10:15:56.708659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:32.031 [2024-07-15 10:15:56.708706] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:32.031 [2024-07-15 10:15:56.708718] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:32.031 [2024-07-15 10:15:56.716668] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:32.031 [2024-07-15 10:15:56.716688] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:32.031 [2024-07-15 10:15:56.724683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:32.031 [2024-07-15 10:15:56.724701] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:32.031 [2024-07-15 10:15:56.792498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:32.031 [2024-07-15 10:15:56.792540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.031 [2024-07-15 10:15:56.792556] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15f70c0 00:08:32.031 [2024-07-15 10:15:56.792566] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.031 [2024-07-15 10:15:56.793546] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.031 [2024-07-15 10:15:56.793572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:32.290 [2024-07-15 10:15:56.941161] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). 
Queue depth is limited to 32 00:08:32.290 [2024-07-15 10:15:56.941969] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:08:32.290 [2024-07-15 10:15:56.943184] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:32.290 [2024-07-15 10:15:56.943963] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:08:32.290 [2024-07-15 10:15:56.945200] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.945997] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.947219] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.948467] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.949263] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.950496] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.951288] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.952546] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.953336] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). 
Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.954574] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.955312] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.956427] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:08:32.291 [2024-07-15 10:15:56.975031] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:32.291 [2024-07-15 10:15:56.976681] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:08:32.291 Running I/O for 5 seconds... 00:08:38.890 00:08:38.890 Latency(us) 00:08:38.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.890 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x100 00:08:38.890 Malloc0 : 5.49 303.30 18.96 0.00 0.00 415969.64 557.06 1221381.32 00:08:38.890 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x100 length 0x100 00:08:38.890 Malloc0 : 6.23 328.95 20.56 0.00 0.00 315289.55 547.23 399297.74 00:08:38.890 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x80 00:08:38.890 Malloc1p0 : 6.02 53.18 3.32 0.00 0.00 2256454.81 1133.77 3597035.11 00:08:38.890 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x80 length 0x80 00:08:38.890 Malloc1p0 : 5.96 132.10 8.26 0.00 0.00 911690.94 1821.90 1879048.19 00:08:38.890 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x80 00:08:38.890 Malloc1p1 : 6.02 53.17 3.32 0.00 0.00 2205381.95 1002.70 3489660.93 00:08:38.890 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x80 length 0x80 00:08:38.890 Malloc1p1 : 6.23 48.76 3.05 0.00 0.00 2458857.05 1159.99 3784939.93 00:08:38.890 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x20 00:08:38.890 Malloc2p0 : 5.80 41.37 2.59 0.00 0.00 711485.87 511.18 1342177.28 00:08:38.890 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x20 length 0x20 00:08:38.890 Malloc2p0 : 5.97 37.54 2.35 0.00 0.00 800686.97 462.03 1409286.14 00:08:38.890 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA 
range: start 0x0 length 0x20 00:08:38.890 Malloc2p1 : 5.80 41.36 2.59 0.00 0.00 707265.58 625.87 1322044.62 00:08:38.890 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x20 length 0x20 00:08:38.890 Malloc2p1 : 5.97 37.54 2.35 0.00 0.00 796275.19 455.48 1395864.37 00:08:38.890 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x20 00:08:38.890 Malloc2p2 : 5.80 41.35 2.58 0.00 0.00 702797.97 560.33 1308622.85 00:08:38.890 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x20 length 0x20 00:08:38.890 Malloc2p2 : 5.97 37.53 2.35 0.00 0.00 791877.36 452.20 1375731.71 00:08:38.890 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x20 00:08:38.890 Malloc2p3 : 5.80 41.35 2.58 0.00 0.00 698731.89 475.14 1288490.19 00:08:38.890 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x20 length 0x20 00:08:38.890 Malloc2p3 : 5.97 37.53 2.35 0.00 0.00 787423.19 455.48 1362309.94 00:08:38.890 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x20 00:08:38.890 Malloc2p4 : 5.81 41.34 2.58 0.00 0.00 694239.10 442.37 1268357.53 00:08:38.890 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x20 length 0x20 00:08:38.890 Malloc2p4 : 5.97 37.52 2.34 0.00 0.00 783150.97 517.73 1348888.17 00:08:38.890 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x20 00:08:38.890 Malloc2p5 : 5.81 41.33 2.58 0.00 0.00 689921.01 484.97 1254935.76 00:08:38.890 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x20 length 0x20 00:08:38.890 Malloc2p5 : 5.97 37.51 2.34 0.00 0.00 778764.27 645.53 1328755.51 00:08:38.890 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x20 00:08:38.890 Malloc2p6 : 5.86 43.71 2.73 0.00 0.00 653527.10 468.58 1234803.10 00:08:38.890 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x20 length 0x20 00:08:38.890 Malloc2p6 : 5.97 37.51 2.34 0.00 0.00 774072.36 566.89 1315333.73 00:08:38.890 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x20 00:08:38.890 Malloc2p7 : 5.86 43.70 2.73 0.00 0.00 649517.91 468.58 1221381.32 00:08:38.890 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x20 length 0x20 00:08:38.890 Malloc2p7 : 5.97 37.50 2.34 0.00 0.00 769739.90 488.24 1295201.08 00:08:38.890 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x100 00:08:38.890 TestPT : 6.17 57.02 3.56 0.00 0.00 1913954.84 1389.36 3234647.24 00:08:38.890 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x100 length 0x100 00:08:38.890 TestPT : 6.26 48.92 3.06 0.00 0.00 2322643.81 127506.84 3154116.61 00:08:38.890 Job: raid0 (Core Mask 
0x1, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x200 00:08:38.890 raid0 : 5.98 61.49 3.84 0.00 0.00 1767316.39 1159.99 3127273.06 00:08:38.890 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x200 length 0x200 00:08:38.890 raid0 : 6.25 51.17 3.20 0.00 0.00 2168512.81 1395.92 3435973.84 00:08:38.890 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x200 00:08:38.890 concat0 : 6.09 66.70 4.17 0.00 0.00 1588115.83 1454.90 3006477.11 00:08:38.890 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x200 length 0x200 00:08:38.890 concat0 : 6.26 53.70 3.36 0.00 0.00 2037899.39 1153.43 3315177.88 00:08:38.890 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x100 00:08:38.890 raid1 : 6.21 82.46 5.15 0.00 0.00 1272051.69 1356.60 2899102.92 00:08:38.890 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x100 length 0x100 00:08:38.890 raid1 : 6.23 56.47 3.53 0.00 0.00 1905587.46 1808.79 3207803.70 00:08:38.890 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x0 length 0x4e 00:08:38.890 AIO0 : 6.21 79.21 4.95 0.00 0.00 792984.24 576.72 1771674.01 00:08:38.890 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:08:38.890 Verification LBA range: start 0x4e length 0x4e 00:08:38.890 AIO0 : 6.23 56.50 3.53 0.00 0.00 1142066.76 579.99 1959578.83 00:08:38.890 =================================================================================================================== 00:08:38.890 Total : 2168.82 135.55 0.00 0.00 1013081.51 442.37 3784939.93 00:08:38.890 00:08:38.890 real 0m7.253s 00:08:38.890 user 0m13.752s 00:08:38.890 sys 0m0.346s 00:08:38.890 10:16:03 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.890 10:16:03 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:38.890 ************************************ 00:08:38.890 END TEST bdev_verify_big_io 00:08:38.890 ************************************ 00:08:38.890 10:16:03 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:08:38.890 10:16:03 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:38.890 10:16:03 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:38.890 10:16:03 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.890 10:16:03 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:39.149 ************************************ 00:08:39.149 START TEST bdev_write_zeroes 00:08:39.149 ************************************ 00:08:39.149 10:16:03 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:39.149 [2024-07-15 10:16:03.747968] Starting SPDK v24.09-pre git sha1 719d03c6a 
/ DPDK 24.03.0 initialization... 00:08:39.149 [2024-07-15 10:16:03.748014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1732645 ] 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 
0000:3f:01.6 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:39.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.149 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:39.149 [2024-07-15 10:16:03.840389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.149 [2024-07-15 10:16:03.909649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.407 [2024-07-15 10:16:04.043814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:39.407 [2024-07-15 10:16:04.043868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:39.407 [2024-07-15 10:16:04.043881] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:39.407 [2024-07-15 10:16:04.051826] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:39.407 [2024-07-15 10:16:04.051851] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:08:39.407 [2024-07-15 10:16:04.059835] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:39.407 [2024-07-15 10:16:04.059856] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:08:39.407 [2024-07-15 10:16:04.127365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:39.407 [2024-07-15 10:16:04.127411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.407 [2024-07-15 10:16:04.127428] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfaf2d0 00:08:39.407 [2024-07-15 10:16:04.127438] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.407 [2024-07-15 10:16:04.128382] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.407 [2024-07-15 10:16:04.128409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:08:39.666 Running I/O for 1 seconds... 
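write_zeroes is an optional bdev I/O type: a bdev module may implement it natively, and the generic bdev layer can otherwise emulate it with writes of zeroed buffers, so the results below do not show which path each bdev took. If an SPDK application with an RPC socket were running with this same configuration loaded (bdevperf here runs standalone, so this is illustrative rather than a replay of the logged run), the advertised I/O types of a bdev could be checked roughly as follows; the jq filter and the Malloc0 name are only for illustration.

# Sketch: list the I/O types a bdev advertises. Assumes a running SPDK app
# with an RPC socket and the same bdev configuration; jq only trims the output.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk

"$SPDK/scripts/rpc.py" bdev_get_bdevs -b Malloc0 | jq '.[0].supported_io_types'
# The returned object includes a "write_zeroes" flag alongside "read",
# "write", "unmap" and the other I/O types.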
00:08:40.603 00:08:40.603 Latency(us) 00:08:40.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.603 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc0 : 1.02 7915.44 30.92 0.00 0.00 16161.55 462.03 28730.98 00:08:40.603 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc1p0 : 1.02 7908.39 30.89 0.00 0.00 16157.36 625.87 28101.84 00:08:40.603 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc1p1 : 1.02 7901.36 30.86 0.00 0.00 16147.21 619.32 27472.69 00:08:40.603 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc2p0 : 1.02 7894.34 30.84 0.00 0.00 16137.24 619.32 26843.55 00:08:40.603 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc2p1 : 1.04 7913.39 30.91 0.00 0.00 16075.59 622.59 26214.40 00:08:40.603 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc2p2 : 1.04 7906.47 30.88 0.00 0.00 16065.17 612.76 25585.25 00:08:40.603 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc2p3 : 1.04 7899.56 30.86 0.00 0.00 16053.53 622.59 24956.11 00:08:40.603 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc2p4 : 1.04 7892.74 30.83 0.00 0.00 16039.51 612.76 24222.11 00:08:40.603 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc2p5 : 1.04 7885.92 30.80 0.00 0.00 16028.01 619.32 23592.96 00:08:40.603 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc2p6 : 1.04 7879.07 30.78 0.00 0.00 16021.91 612.76 22963.81 00:08:40.603 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 Malloc2p7 : 1.04 7872.26 30.75 0.00 0.00 16012.24 619.32 22229.81 00:08:40.603 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 TestPT : 1.04 7865.47 30.72 0.00 0.00 16005.50 648.81 21495.81 00:08:40.603 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 raid0 : 1.04 7857.60 30.69 0.00 0.00 15985.65 1081.34 20447.23 00:08:40.603 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 concat0 : 1.04 7849.80 30.66 0.00 0.00 15961.08 1074.79 19188.94 00:08:40.603 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 raid1 : 1.04 7839.98 30.62 0.00 0.00 15933.94 1703.94 17406.36 00:08:40.603 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:40.603 AIO0 : 1.05 7834.13 30.60 0.00 0.00 15893.38 704.51 17406.36 00:08:40.603 =================================================================================================================== 00:08:40.603 Total : 126115.90 492.64 0.00 0.00 16042.00 462.03 28730.98 00:08:40.862 00:08:40.862 real 0m1.952s 00:08:40.862 user 0m1.618s 00:08:40.862 sys 0m0.279s 00:08:40.862 10:16:05 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.862 10:16:05 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:40.862 ************************************ 00:08:40.862 END TEST bdev_write_zeroes 00:08:40.862 ************************************ 00:08:41.120 10:16:05 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:08:41.120 10:16:05 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.120 10:16:05 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:41.120 10:16:05 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.120 10:16:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:41.120 ************************************ 00:08:41.120 START TEST bdev_json_nonenclosed 00:08:41.120 ************************************ 00:08:41.120 10:16:05 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.120 [2024-07-15 10:16:05.773175] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:41.120 [2024-07-15 10:16:05.773215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733073 ] 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 
0000:3d:02.6 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.120 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:41.120 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.121 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:41.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.121 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:41.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.121 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:41.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.121 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:41.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.121 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:41.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.121 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:41.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.121 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:41.121 [2024-07-15 10:16:05.861458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.378 [2024-07-15 10:16:05.931160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.378 [2024-07-15 10:16:05.931220] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:08:41.378 [2024-07-15 10:16:05.931237] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:41.379 [2024-07-15 10:16:05.931247] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.379 00:08:41.379 real 0m0.277s 00:08:41.379 user 0m0.164s 00:08:41.379 sys 0m0.111s 00:08:41.379 10:16:06 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:41.379 10:16:06 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.379 10:16:06 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:41.379 ************************************ 00:08:41.379 END TEST bdev_json_nonenclosed 00:08:41.379 ************************************ 00:08:41.379 10:16:06 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:08:41.379 10:16:06 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:08:41.379 10:16:06 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.379 10:16:06 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:41.379 10:16:06 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.379 10:16:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:41.379 ************************************ 00:08:41.379 START TEST bdev_json_nonarray 00:08:41.379 ************************************ 00:08:41.379 10:16:06 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.379 [2024-07-15 10:16:06.126461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
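Note on the two JSON negative tests: the bdev_json_nonenclosed failure above is the expected outcome, and the bdev_json_nonarray run starting here is the companion case — bdevperf is handed a config that is not a JSON object enclosed in {} (and next, one whose "subsystems" key is not an array) and must reject it and exit non-zero (the es=234 recorded above). The sketch below shows how such malformed configs could be reproduced by hand against the same bdevperf binary and flags seen in this log; the /tmp paths and the exact JSON contents are illustrative assumptions, not the repository's checked-in nonenclosed.json/nonarray.json fixtures.

#!/usr/bin/env bash
# Sketch only: hand-rolled analogs of the two negative JSON-config cases.
# Paths under /tmp and the JSON bodies are assumptions; the bdevperf flags
# (-q/-o/-w/-t) mirror the invocation recorded in this log.
BDEVPERF=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf

# Not enclosed in {} -- should trigger "not enclosed in {}."
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF

# Enclosed, but "subsystems" is not an array -- should trigger
# "'subsystems' should be an array."
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF

for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
    if $BDEVPERF --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1; then
        echo "unexpected: $cfg was accepted" >&2
        exit 1
    else
        echo "expected rejection of $cfg (exit $?)"
    fi
done

In the suite itself the equivalent calls are driven from bdev/blockdev.sh@782 and @785, so a rejection (exit 234 above) is counted as a pass.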
00:08:41.379 [2024-07-15 10:16:06.126500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733208 ] 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:41.637 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.637 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:41.638 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:41.638 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.638 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:41.638 [2024-07-15 10:16:06.215252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.638 [2024-07-15 10:16:06.284708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.638 [2024-07-15 10:16:06.284776] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:41.638 [2024-07-15 10:16:06.284794] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:41.638 [2024-07-15 10:16:06.284804] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.638 00:08:41.638 real 0m0.280s 00:08:41.638 user 0m0.168s 00:08:41.638 sys 0m0.110s 00:08:41.638 10:16:06 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:41.638 10:16:06 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.638 10:16:06 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:41.638 ************************************ 00:08:41.638 END TEST bdev_json_nonarray 00:08:41.638 ************************************ 00:08:41.638 10:16:06 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:08:41.638 10:16:06 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:08:41.638 10:16:06 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:08:41.638 10:16:06 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:08:41.638 10:16:06 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.638 10:16:06 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.638 10:16:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:08:41.898 ************************************ 00:08:41.898 START TEST bdev_qos 00:08:41.898 ************************************ 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=1733233 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 1733233' 00:08:41.898 Process qos testing pid: 1733233 00:08:41.898 10:16:06 
blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 1733233 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 1733233 ']' 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.898 10:16:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:41.898 [2024-07-15 10:16:06.483671] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:41.898 [2024-07-15 10:16:06.483718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733233 ] 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:41.898 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:41.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.898 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:41.898 [2024-07-15 10:16:06.572991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.898 [2024-07-15 10:16:06.640348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 Malloc_0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:08:42.838 
10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 [ 00:08:42.838 { 00:08:42.838 "name": "Malloc_0", 00:08:42.838 "aliases": [ 00:08:42.838 "d81d58d6-dc3e-4bb1-b99e-c500320f9cb1" 00:08:42.838 ], 00:08:42.838 "product_name": "Malloc disk", 00:08:42.838 "block_size": 512, 00:08:42.838 "num_blocks": 262144, 00:08:42.838 "uuid": "d81d58d6-dc3e-4bb1-b99e-c500320f9cb1", 00:08:42.838 "assigned_rate_limits": { 00:08:42.838 "rw_ios_per_sec": 0, 00:08:42.838 "rw_mbytes_per_sec": 0, 00:08:42.838 "r_mbytes_per_sec": 0, 00:08:42.838 "w_mbytes_per_sec": 0 00:08:42.838 }, 00:08:42.838 "claimed": false, 00:08:42.838 "zoned": false, 00:08:42.838 "supported_io_types": { 00:08:42.838 "read": true, 00:08:42.838 "write": true, 00:08:42.838 "unmap": true, 00:08:42.838 "flush": true, 00:08:42.838 "reset": true, 00:08:42.838 "nvme_admin": false, 00:08:42.838 "nvme_io": false, 00:08:42.838 "nvme_io_md": false, 00:08:42.838 "write_zeroes": true, 00:08:42.838 "zcopy": true, 00:08:42.838 "get_zone_info": false, 00:08:42.838 "zone_management": false, 00:08:42.838 "zone_append": false, 00:08:42.838 "compare": false, 00:08:42.838 "compare_and_write": false, 00:08:42.838 "abort": true, 00:08:42.838 "seek_hole": false, 00:08:42.838 "seek_data": false, 00:08:42.838 "copy": true, 00:08:42.838 "nvme_iov_md": false 00:08:42.838 }, 00:08:42.838 "memory_domains": [ 00:08:42.838 { 00:08:42.838 "dma_device_id": "system", 00:08:42.838 "dma_device_type": 1 00:08:42.838 }, 00:08:42.838 { 00:08:42.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.838 "dma_device_type": 2 00:08:42.838 } 00:08:42.838 ], 00:08:42.838 "driver_specific": {} 00:08:42.838 } 00:08:42.838 ] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 Null_1 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # 
waitforbdev Null_1 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 [ 00:08:42.838 { 00:08:42.838 "name": "Null_1", 00:08:42.838 "aliases": [ 00:08:42.838 "3bb7df83-37a7-412a-9f83-6223977c8d9c" 00:08:42.838 ], 00:08:42.838 "product_name": "Null disk", 00:08:42.838 "block_size": 512, 00:08:42.838 "num_blocks": 262144, 00:08:42.838 "uuid": "3bb7df83-37a7-412a-9f83-6223977c8d9c", 00:08:42.838 "assigned_rate_limits": { 00:08:42.838 "rw_ios_per_sec": 0, 00:08:42.838 "rw_mbytes_per_sec": 0, 00:08:42.838 "r_mbytes_per_sec": 0, 00:08:42.838 "w_mbytes_per_sec": 0 00:08:42.838 }, 00:08:42.838 "claimed": false, 00:08:42.838 "zoned": false, 00:08:42.838 "supported_io_types": { 00:08:42.838 "read": true, 00:08:42.838 "write": true, 00:08:42.838 "unmap": false, 00:08:42.838 "flush": false, 00:08:42.838 "reset": true, 00:08:42.838 "nvme_admin": false, 00:08:42.838 "nvme_io": false, 00:08:42.838 "nvme_io_md": false, 00:08:42.838 "write_zeroes": true, 00:08:42.838 "zcopy": false, 00:08:42.838 "get_zone_info": false, 00:08:42.838 "zone_management": false, 00:08:42.838 "zone_append": false, 00:08:42.838 "compare": false, 00:08:42.838 "compare_and_write": false, 00:08:42.838 "abort": true, 00:08:42.838 "seek_hole": false, 00:08:42.838 "seek_data": false, 00:08:42.838 "copy": false, 00:08:42.838 "nvme_iov_md": false 00:08:42.838 }, 00:08:42.838 "driver_specific": {} 00:08:42.838 } 00:08:42.838 ] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:42.838 10:16:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:08:42.838 Running I/O for 60 seconds... 00:08:48.112 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 102101.86 408407.43 0.00 0.00 411648.00 0.00 0.00 ' 00:08:48.112 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:08:48.112 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:08:48.112 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=102101.86 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 102101 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=102101 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=25000 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 25000 -gt 1000 ']' 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 25000 Malloc_0 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 25000 IOPS Malloc_0 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.113 10:16:12 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:48.113 ************************************ 00:08:48.113 START TEST bdev_qos_iops 00:08:48.113 ************************************ 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 25000 IOPS Malloc_0 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=25000 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:08:48.113 10:16:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 24988.07 99952.28 0.00 0.00 100900.00 0.00 0.00 ' 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=24988.07 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 24988 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=24988 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=22500 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=27500 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 24988 -lt 22500 ']' 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 24988 -gt 27500 ']' 00:08:53.383 00:08:53.383 real 0m5.180s 00:08:53.383 user 0m0.086s 00:08:53.383 sys 0m0.044s 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.383 10:16:17 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:08:53.383 ************************************ 00:08:53.383 END TEST bdev_qos_iops 00:08:53.383 ************************************ 00:08:53.383 10:16:17 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:08:53.383 10:16:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:08:53.383 10:16:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:08:53.383 10:16:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:08:53.383 10:16:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:53.383 10:16:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:53.383 10:16:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:08:53.383 10:16:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 31346.42 125385.68 0.00 0.00 126976.00 0.00 0.00 ' 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=126976.00 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 126976 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 
-- # bw_limit=126976 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=12 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 12 -lt 2 ']' 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.662 10:16:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:08:58.662 ************************************ 00:08:58.662 START TEST bdev_qos_bw 00:08:58.662 ************************************ 00:08:58.662 10:16:22 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 12 BANDWIDTH Null_1 00:08:58.662 10:16:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=12 00:08:58.662 10:16:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:08:58.662 10:16:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:08:58.662 10:16:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:08:58.662 10:16:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:08:58.662 10:16:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:08:58.662 10:16:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:08:58.662 10:16:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:08:58.662 10:16:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3072.60 12290.41 0.00 0.00 12424.00 0.00 0.00 ' 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=12424.00 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 12424 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=12424 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=12288 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=11059 00:09:03.958 10:16:28 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=13516 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 12424 -lt 11059 ']' 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 12424 -gt 13516 ']' 00:09:03.958 00:09:03.958 real 0m5.184s 00:09:03.958 user 0m0.079s 00:09:03.958 sys 0m0.052s 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:09:03.958 ************************************ 00:09:03.958 END TEST bdev_qos_bw 00:09:03.958 ************************************ 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.958 10:16:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:03.958 ************************************ 00:09:03.958 START TEST bdev_qos_ro_bw 00:09:03.958 ************************************ 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:09:03.958 10:16:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.33 2045.32 0.00 0.00 2056.00 0.00 0.00 ' 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk 
'{print $6}' 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2056.00 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2056 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2056 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2056 -lt 1843 ']' 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2056 -gt 2252 ']' 00:09:09.220 00:09:09.220 real 0m5.147s 00:09:09.220 user 0m0.075s 00:09:09.220 sys 0m0.046s 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.220 10:16:33 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:09:09.220 ************************************ 00:09:09.220 END TEST bdev_qos_ro_bw 00:09:09.220 ************************************ 00:09:09.220 10:16:33 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:09:09.220 10:16:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:09:09.220 10:16:33 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.220 10:16:33 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:09.478 00:09:09.478 Latency(us) 00:09:09.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.478 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:09.478 Malloc_0 : 26.49 34090.98 133.17 0.00 0.00 7435.05 1356.60 503316.48 00:09:09.478 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:09.478 Null_1 : 26.60 32932.18 128.64 0.00 0.00 7758.36 537.40 102760.45 00:09:09.478 =================================================================================================================== 00:09:09.478 Total : 67023.15 261.81 0.00 0.00 7594.23 537.40 503316.48 00:09:09.478 0 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 1733233 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 1733233 ']' 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 1733233 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
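Before the QoS bdevperf process is shut down below, note that all three sub-tests that just completed (bdev_qos_iops, bdev_qos_bw, bdev_qos_ro_bw) follow the same pattern: set a limit with bdev_set_qos_limit, sample throughput with scripts/iostat.py, and pass if the measured value stays within 10% of the configured limit; bandwidth limits are compared in KiB/s, which is why the 12 MiB/s and 2 MiB/s limits appear above as 12288 and 2048. The sketch below is a simplified stand-in for that acceptance check, not the code in bdev/blockdev.sh; the function name is invented, but the variable names and the three result pairs are taken directly from the xtrace output above.

#!/usr/bin/env bash
# Hedged sketch of the +/-10% acceptance check used by the QoS sub-tests.
# check_within_10_percent is a stand-in name; qos_limit/qos_result and the
# lower/upper bounds mirror the values printed in this log.
check_within_10_percent() {
    local qos_limit=$1      # e.g. 25000 IOPS, or 12288 / 2048 KiB per second
    local qos_result=$2     # measured by scripts/iostat.py
    local lower_limit=$((qos_limit * 9 / 10))
    local upper_limit=$((qos_limit * 11 / 10))
    if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
        echo "FAIL: $qos_result outside [$lower_limit, $upper_limit]" >&2
        return 1
    fi
    echo "PASS: $qos_result within [$lower_limit, $upper_limit]"
}

# The three results recorded in this run:
check_within_10_percent 25000 24988   # IOPS limit on Malloc_0 (bounds 22500/27500)
check_within_10_percent 12288 12424   # 12 MiB/s rw limit on Null_1 (bounds 11059/13516)
check_within_10_percent  2048  2056   # 2 MiB/s read-only limit on Malloc_0 (bounds 1843/2252)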
00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1733233 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1733233' 00:09:09.478 killing process with pid 1733233 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 1733233 00:09:09.478 Received shutdown signal, test time was about 26.658227 seconds 00:09:09.478 00:09:09.478 Latency(us) 00:09:09.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.478 =================================================================================================================== 00:09:09.478 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:09.478 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 1733233 00:09:09.736 10:16:34 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:09:09.736 00:09:09.736 real 0m27.896s 00:09:09.736 user 0m28.406s 00:09:09.736 sys 0m0.765s 00:09:09.736 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.736 10:16:34 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:09:09.736 ************************************ 00:09:09.736 END TEST bdev_qos 00:09:09.736 ************************************ 00:09:09.736 10:16:34 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:09.736 10:16:34 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:09:09.736 10:16:34 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.736 10:16:34 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.736 10:16:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:09.736 ************************************ 00:09:09.736 START TEST bdev_qd_sampling 00:09:09.736 ************************************ 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=1738087 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 1738087' 00:09:09.736 Process bdev QD sampling period testing pid: 1738087 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 1738087 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 1738087 ']' 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.736 10:16:34 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:09.736 [2024-07-15 10:16:34.446404] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:09.736 [2024-07-15 10:16:34.446444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738087 ] 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:09.736 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:09.736 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:09.736 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:09.994 [2024-07-15 10:16:34.537595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.994 [2024-07-15 10:16:34.611417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.994 [2024-07-15 10:16:34.611420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:10.560 Malloc_QD 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:09:10.560 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:10.561 [ 00:09:10.561 { 00:09:10.561 "name": "Malloc_QD", 00:09:10.561 "aliases": [ 00:09:10.561 "bce08e0f-180b-4096-a7c0-c276603d96de" 00:09:10.561 ], 00:09:10.561 "product_name": "Malloc disk", 00:09:10.561 "block_size": 512, 00:09:10.561 "num_blocks": 262144, 00:09:10.561 "uuid": "bce08e0f-180b-4096-a7c0-c276603d96de", 00:09:10.561 "assigned_rate_limits": { 00:09:10.561 "rw_ios_per_sec": 0, 00:09:10.561 "rw_mbytes_per_sec": 0, 00:09:10.561 "r_mbytes_per_sec": 0, 00:09:10.561 "w_mbytes_per_sec": 0 00:09:10.561 }, 00:09:10.561 "claimed": false, 00:09:10.561 "zoned": false, 00:09:10.561 "supported_io_types": { 00:09:10.561 "read": true, 00:09:10.561 "write": true, 00:09:10.561 "unmap": true, 00:09:10.561 "flush": true, 00:09:10.561 "reset": true, 00:09:10.561 "nvme_admin": false, 00:09:10.561 "nvme_io": false, 00:09:10.561 "nvme_io_md": false, 00:09:10.561 "write_zeroes": true, 00:09:10.561 "zcopy": true, 00:09:10.561 "get_zone_info": false, 00:09:10.561 "zone_management": false, 00:09:10.561 "zone_append": false, 00:09:10.561 "compare": false, 00:09:10.561 "compare_and_write": false, 00:09:10.561 "abort": true, 00:09:10.561 "seek_hole": false, 00:09:10.561 "seek_data": false, 00:09:10.561 "copy": true, 00:09:10.561 "nvme_iov_md": false 00:09:10.561 }, 00:09:10.561 "memory_domains": [ 00:09:10.561 { 00:09:10.561 "dma_device_id": "system", 00:09:10.561 "dma_device_type": 1 00:09:10.561 }, 00:09:10.561 { 00:09:10.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.561 "dma_device_type": 2 00:09:10.561 } 00:09:10.561 ], 00:09:10.561 "driver_specific": {} 00:09:10.561 } 00:09:10.561 ] 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:09:10.561 10:16:35 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.819 Running I/O for 5 seconds... 
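While bdevperf runs its 5-second randread workload, the QD sampling test enables a 10 ms queue-depth sampling period on Malloc_QD and then asserts that bdev_get_iostat reports it back (the jq check on queue_depth_polling_period in the output below). The sketch here restates that flow against a running SPDK target that already exposes Malloc_QD; the rpc.py path mirrors this workspace, and the average-queue-depth calculation at the end is an extra illustrative step the suite itself does not print — in the iostat JSON below it works out to 20480 / 40 = 512.

#!/usr/bin/env bash
# Hedged sketch of the queue-depth sampling check; assumes an SPDK app is
# already serving RPCs on the default socket with a bdev named Malloc_QD.
RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_set_qd_sampling_period Malloc_QD 10        # sample every 10 ms

iostats=$($RPC bdev_get_iostat -b Malloc_QD)
period=$(jq -r '.bdevs[0].queue_depth_polling_period' <<< "$iostats")
[ "$period" -eq 10 ] || { echo "sampling period not applied" >&2; exit 1; }

# Average measured queue depth = weighted_io_time / io_time
# (illustrative only; 20480 / 40 = 512 for the run captured in this log).
jq -r '.bdevs[0] | (.weighted_io_time / .io_time)' <<< "$iostats"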
00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:09:12.741 "tick_rate": 2500000000, 00:09:12.741 "ticks": 12083136354155708, 00:09:12.741 "bdevs": [ 00:09:12.741 { 00:09:12.741 "name": "Malloc_QD", 00:09:12.741 "bytes_read": 1054913024, 00:09:12.741 "num_read_ops": 257540, 00:09:12.741 "bytes_written": 0, 00:09:12.741 "num_write_ops": 0, 00:09:12.741 "bytes_unmapped": 0, 00:09:12.741 "num_unmap_ops": 0, 00:09:12.741 "bytes_copied": 0, 00:09:12.741 "num_copy_ops": 0, 00:09:12.741 "read_latency_ticks": 2484448512992, 00:09:12.741 "max_read_latency_ticks": 11970636, 00:09:12.741 "min_read_latency_ticks": 189218, 00:09:12.741 "write_latency_ticks": 0, 00:09:12.741 "max_write_latency_ticks": 0, 00:09:12.741 "min_write_latency_ticks": 0, 00:09:12.741 "unmap_latency_ticks": 0, 00:09:12.741 "max_unmap_latency_ticks": 0, 00:09:12.741 "min_unmap_latency_ticks": 0, 00:09:12.741 "copy_latency_ticks": 0, 00:09:12.741 "max_copy_latency_ticks": 0, 00:09:12.741 "min_copy_latency_ticks": 0, 00:09:12.741 "io_error": {}, 00:09:12.741 "queue_depth_polling_period": 10, 00:09:12.741 "queue_depth": 512, 00:09:12.741 "io_time": 40, 00:09:12.741 "weighted_io_time": 20480 00:09:12.741 } 00:09:12.741 ] 00:09:12.741 }' 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:12.741 00:09:12.741 Latency(us) 00:09:12.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.741 Job: Malloc_QD (Core Mask 0x1, workload: randread, 
depth: 256, IO size: 4096) 00:09:12.741 Malloc_QD : 2.02 65985.17 257.75 0.00 0.00 3871.08 1402.47 4272.95 00:09:12.741 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:12.741 Malloc_QD : 2.02 66488.53 259.72 0.00 0.00 3842.03 1310.72 4797.24 00:09:12.741 =================================================================================================================== 00:09:12.741 Total : 132473.70 517.48 0.00 0.00 3856.50 1310.72 4797.24 00:09:12.741 0 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 1738087 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 1738087 ']' 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 1738087 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1738087 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1738087' 00:09:12.741 killing process with pid 1738087 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 1738087 00:09:12.741 Received shutdown signal, test time was about 2.096673 seconds 00:09:12.741 00:09:12.741 Latency(us) 00:09:12.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.741 =================================================================================================================== 00:09:12.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.741 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 1738087 00:09:13.000 10:16:37 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:09:13.000 00:09:13.000 real 0m3.257s 00:09:13.000 user 0m6.405s 00:09:13.000 sys 0m0.365s 00:09:13.000 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.000 10:16:37 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:09:13.000 ************************************ 00:09:13.000 END TEST bdev_qd_sampling 00:09:13.000 ************************************ 00:09:13.000 10:16:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:13.000 10:16:37 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:09:13.000 10:16:37 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.000 10:16:37 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.000 10:16:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:13.000 ************************************ 00:09:13.000 START TEST bdev_error 00:09:13.000 ************************************ 00:09:13.000 10:16:37 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 
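The bdev_error suite that starts here layers SPDK's error-injection bdev (EE_Dev_1) on top of a malloc bdev and runs bdevperf against it while errors are injected. A rough hand-driven sketch of the same flow, under the same assumptions as the sketch above (bdevperf started with -z, default RPC socket, stock helpers):

  # names Dev_1 / Dev_2 / EE_Dev_1 mirror the variables set by blockdev.sh below
  ./scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
  ./scripts/rpc.py bdev_error_create Dev_1                            # exposes the error bdev EE_Dev_1 on top of Dev_1
  ./scripts/rpc.py bdev_malloc_create -b Dev_2 128 512
  ./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5  # inject 'failure' on all I/O types for a count of 5, as the test does
  ./examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
  ./scripts/rpc.py bdev_error_delete EE_Dev_1                         # teardown mirrors bdev/blockdev.sh
  ./scripts/rpc.py bdev_malloc_delete Dev_1

The suite runs bdevperf twice: once with continue-on-error set, where the injected failures are tolerated, and once without it, where the run is expected to fail (the NOT wait / JSON-RPC "Operation not permitted" check near the end of the suite).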
00:09:13.000 10:16:37 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:09:13.000 10:16:37 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:09:13.000 10:16:37 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:09:13.000 10:16:37 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=1738651 00:09:13.000 10:16:37 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 1738651' 00:09:13.000 Process error testing pid: 1738651 00:09:13.000 10:16:37 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:09:13.000 10:16:37 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 1738651 00:09:13.000 10:16:37 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 1738651 ']' 00:09:13.000 10:16:37 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.000 10:16:37 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.000 10:16:37 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.000 10:16:37 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.000 10:16:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:13.258 [2024-07-15 10:16:37.806464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:13.258 [2024-07-15 10:16:37.806517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738651 ] 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.258 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:13.258 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:13.258 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:13.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.259 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:13.259 [2024-07-15 10:16:37.897379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.259 [2024-07-15 10:16:37.964580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.824 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.824 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:09:13.824 10:16:38 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b 
Dev_1 128 512 00:09:13.824 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.824 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:14.083 Dev_1 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:14.083 [ 00:09:14.083 { 00:09:14.083 "name": "Dev_1", 00:09:14.083 "aliases": [ 00:09:14.083 "77de7345-01c6-4a6e-a73c-d48ecfc90684" 00:09:14.083 ], 00:09:14.083 "product_name": "Malloc disk", 00:09:14.083 "block_size": 512, 00:09:14.083 "num_blocks": 262144, 00:09:14.083 "uuid": "77de7345-01c6-4a6e-a73c-d48ecfc90684", 00:09:14.083 "assigned_rate_limits": { 00:09:14.083 "rw_ios_per_sec": 0, 00:09:14.083 "rw_mbytes_per_sec": 0, 00:09:14.083 "r_mbytes_per_sec": 0, 00:09:14.083 "w_mbytes_per_sec": 0 00:09:14.083 }, 00:09:14.083 "claimed": false, 00:09:14.083 "zoned": false, 00:09:14.083 "supported_io_types": { 00:09:14.083 "read": true, 00:09:14.083 "write": true, 00:09:14.083 "unmap": true, 00:09:14.083 "flush": true, 00:09:14.083 "reset": true, 00:09:14.083 "nvme_admin": false, 00:09:14.083 "nvme_io": false, 00:09:14.083 "nvme_io_md": false, 00:09:14.083 "write_zeroes": true, 00:09:14.083 "zcopy": true, 00:09:14.083 "get_zone_info": false, 00:09:14.083 "zone_management": false, 00:09:14.083 "zone_append": false, 00:09:14.083 "compare": false, 00:09:14.083 "compare_and_write": false, 00:09:14.083 "abort": true, 00:09:14.083 "seek_hole": false, 00:09:14.083 "seek_data": false, 00:09:14.083 "copy": true, 00:09:14.083 "nvme_iov_md": false 00:09:14.083 }, 00:09:14.083 "memory_domains": [ 00:09:14.083 { 00:09:14.083 "dma_device_id": "system", 00:09:14.083 "dma_device_type": 1 00:09:14.083 }, 00:09:14.083 { 00:09:14.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.083 "dma_device_type": 2 00:09:14.083 } 00:09:14.083 ], 00:09:14.083 "driver_specific": {} 00:09:14.083 } 00:09:14.083 ] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:14.083 10:16:38 blockdev_general.bdev_error -- 
bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:14.083 true 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:14.083 Dev_2 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:14.083 [ 00:09:14.083 { 00:09:14.083 "name": "Dev_2", 00:09:14.083 "aliases": [ 00:09:14.083 "12b03580-bc36-450c-8e4c-058407b49a05" 00:09:14.083 ], 00:09:14.083 "product_name": "Malloc disk", 00:09:14.083 "block_size": 512, 00:09:14.083 "num_blocks": 262144, 00:09:14.083 "uuid": "12b03580-bc36-450c-8e4c-058407b49a05", 00:09:14.083 "assigned_rate_limits": { 00:09:14.083 "rw_ios_per_sec": 0, 00:09:14.083 "rw_mbytes_per_sec": 0, 00:09:14.083 "r_mbytes_per_sec": 0, 00:09:14.083 "w_mbytes_per_sec": 0 00:09:14.083 }, 00:09:14.083 "claimed": false, 00:09:14.083 "zoned": false, 00:09:14.083 "supported_io_types": { 00:09:14.083 "read": true, 00:09:14.083 "write": true, 00:09:14.083 "unmap": true, 00:09:14.083 "flush": true, 00:09:14.083 "reset": true, 00:09:14.083 "nvme_admin": false, 00:09:14.083 "nvme_io": false, 00:09:14.083 "nvme_io_md": false, 00:09:14.083 "write_zeroes": true, 00:09:14.083 "zcopy": true, 00:09:14.083 "get_zone_info": false, 00:09:14.083 "zone_management": false, 00:09:14.083 "zone_append": false, 00:09:14.083 "compare": false, 00:09:14.083 "compare_and_write": false, 00:09:14.083 "abort": true, 00:09:14.083 "seek_hole": false, 00:09:14.083 "seek_data": false, 00:09:14.083 "copy": true, 00:09:14.083 "nvme_iov_md": false 00:09:14.083 }, 00:09:14.083 "memory_domains": [ 00:09:14.083 { 00:09:14.083 "dma_device_id": "system", 00:09:14.083 
"dma_device_type": 1 00:09:14.083 }, 00:09:14.083 { 00:09:14.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.083 "dma_device_type": 2 00:09:14.083 } 00:09:14.083 ], 00:09:14.083 "driver_specific": {} 00:09:14.083 } 00:09:14.083 ] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:14.083 10:16:38 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:14.083 10:16:38 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.083 10:16:38 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:09:14.083 10:16:38 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:14.083 Running I/O for 5 seconds... 00:09:15.019 10:16:39 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 1738651 00:09:15.019 10:16:39 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 1738651' 00:09:15.019 Process is existed as continue on error is set. Pid: 1738651 00:09:15.019 10:16:39 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:09:15.019 10:16:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.019 10:16:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:15.019 10:16:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.019 10:16:39 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:09:15.019 10:16:39 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.019 10:16:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:15.019 10:16:39 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.019 10:16:39 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:09:15.278 Timeout while waiting for response: 00:09:15.278 00:09:15.278 00:09:19.465 00:09:19.465 Latency(us) 00:09:19.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.465 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:19.465 EE_Dev_1 : 0.93 61154.60 238.89 5.41 0.00 259.41 89.29 462.03 00:09:19.465 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:19.465 Dev_2 : 5.00 130413.58 509.43 0.00 0.00 120.52 40.35 18769.51 00:09:19.465 =================================================================================================================== 00:09:19.465 Total : 191568.17 748.31 5.41 0.00 131.61 40.35 18769.51 00:09:20.031 10:16:44 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 1738651 00:09:20.031 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 1738651 ']' 00:09:20.031 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 1738651 00:09:20.031 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:09:20.031 10:16:44 
blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.031 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1738651 00:09:20.290 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:20.290 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:20.290 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1738651' 00:09:20.290 killing process with pid 1738651 00:09:20.290 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 1738651 00:09:20.290 Received shutdown signal, test time was about 5.000000 seconds 00:09:20.290 00:09:20.290 Latency(us) 00:09:20.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.290 =================================================================================================================== 00:09:20.290 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:20.290 10:16:44 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 1738651 00:09:20.290 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=1739975 00:09:20.290 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 1739975' 00:09:20.290 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:09:20.290 Process error testing pid: 1739975 00:09:20.290 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 1739975 00:09:20.290 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 1739975 ']' 00:09:20.290 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.290 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.290 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.290 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.290 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:20.549 [2024-07-15 10:16:45.100542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:20.549 [2024-07-15 10:16:45.100593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739975 ] 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:20.549 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:20.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.549 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:20.549 [2024-07-15 10:16:45.191559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.549 [2024-07-15 10:16:45.264353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.128 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.128 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:09:21.128 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:21.128 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.128 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.411 Dev_1 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.411 [ 00:09:21.411 { 00:09:21.411 "name": "Dev_1", 00:09:21.411 "aliases": [ 
00:09:21.411 "81ab7e4b-eff5-4373-a1f4-fc245072273d" 00:09:21.411 ], 00:09:21.411 "product_name": "Malloc disk", 00:09:21.411 "block_size": 512, 00:09:21.411 "num_blocks": 262144, 00:09:21.411 "uuid": "81ab7e4b-eff5-4373-a1f4-fc245072273d", 00:09:21.411 "assigned_rate_limits": { 00:09:21.411 "rw_ios_per_sec": 0, 00:09:21.411 "rw_mbytes_per_sec": 0, 00:09:21.411 "r_mbytes_per_sec": 0, 00:09:21.411 "w_mbytes_per_sec": 0 00:09:21.411 }, 00:09:21.411 "claimed": false, 00:09:21.411 "zoned": false, 00:09:21.411 "supported_io_types": { 00:09:21.411 "read": true, 00:09:21.411 "write": true, 00:09:21.411 "unmap": true, 00:09:21.411 "flush": true, 00:09:21.411 "reset": true, 00:09:21.411 "nvme_admin": false, 00:09:21.411 "nvme_io": false, 00:09:21.411 "nvme_io_md": false, 00:09:21.411 "write_zeroes": true, 00:09:21.411 "zcopy": true, 00:09:21.411 "get_zone_info": false, 00:09:21.411 "zone_management": false, 00:09:21.411 "zone_append": false, 00:09:21.411 "compare": false, 00:09:21.411 "compare_and_write": false, 00:09:21.411 "abort": true, 00:09:21.411 "seek_hole": false, 00:09:21.411 "seek_data": false, 00:09:21.411 "copy": true, 00:09:21.411 "nvme_iov_md": false 00:09:21.411 }, 00:09:21.411 "memory_domains": [ 00:09:21.411 { 00:09:21.411 "dma_device_id": "system", 00:09:21.411 "dma_device_type": 1 00:09:21.411 }, 00:09:21.411 { 00:09:21.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.411 "dma_device_type": 2 00:09:21.411 } 00:09:21.411 ], 00:09:21.411 "driver_specific": {} 00:09:21.411 } 00:09:21.411 ] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:21.411 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.411 true 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.411 Dev_2 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.411 10:16:45 
blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.411 10:16:45 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.411 [ 00:09:21.411 { 00:09:21.411 "name": "Dev_2", 00:09:21.411 "aliases": [ 00:09:21.411 "d7839b12-04ea-429e-8ae1-1ace536ac717" 00:09:21.411 ], 00:09:21.411 "product_name": "Malloc disk", 00:09:21.411 "block_size": 512, 00:09:21.411 "num_blocks": 262144, 00:09:21.411 "uuid": "d7839b12-04ea-429e-8ae1-1ace536ac717", 00:09:21.411 "assigned_rate_limits": { 00:09:21.411 "rw_ios_per_sec": 0, 00:09:21.411 "rw_mbytes_per_sec": 0, 00:09:21.411 "r_mbytes_per_sec": 0, 00:09:21.411 "w_mbytes_per_sec": 0 00:09:21.411 }, 00:09:21.411 "claimed": false, 00:09:21.411 "zoned": false, 00:09:21.411 "supported_io_types": { 00:09:21.411 "read": true, 00:09:21.411 "write": true, 00:09:21.411 "unmap": true, 00:09:21.411 "flush": true, 00:09:21.411 "reset": true, 00:09:21.411 "nvme_admin": false, 00:09:21.411 "nvme_io": false, 00:09:21.411 "nvme_io_md": false, 00:09:21.411 "write_zeroes": true, 00:09:21.411 "zcopy": true, 00:09:21.411 "get_zone_info": false, 00:09:21.411 "zone_management": false, 00:09:21.411 "zone_append": false, 00:09:21.411 "compare": false, 00:09:21.411 "compare_and_write": false, 00:09:21.411 "abort": true, 00:09:21.411 "seek_hole": false, 00:09:21.411 "seek_data": false, 00:09:21.411 "copy": true, 00:09:21.411 "nvme_iov_md": false 00:09:21.411 }, 00:09:21.411 "memory_domains": [ 00:09:21.411 { 00:09:21.411 "dma_device_id": "system", 00:09:21.411 "dma_device_type": 1 00:09:21.411 }, 00:09:21.411 { 00:09:21.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.411 "dma_device_type": 2 00:09:21.411 } 00:09:21.411 ], 00:09:21.411 "driver_specific": {} 00:09:21.411 } 00:09:21.411 ] 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:09:21.411 10:16:46 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.411 10:16:46 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 1739975 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1739975 00:09:21.411 10:16:46 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:09:21.411 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 1739975 00:09:21.411 Running I/O for 5 seconds... 00:09:21.411 task offset: 101224 on job bdev=EE_Dev_1 fails 00:09:21.411 00:09:21.411 Latency(us) 00:09:21.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.411 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:21.411 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:09:21.411 EE_Dev_1 : 0.00 45454.55 177.56 10330.58 0.00 238.65 87.24 425.98 00:09:21.411 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:21.411 Dev_2 : 0.00 28444.44 111.11 0.00 0.00 412.82 83.97 766.77 00:09:21.411 =================================================================================================================== 00:09:21.411 Total : 73898.99 288.67 10330.58 0.00 333.12 83.97 766.77 00:09:21.411 [2024-07-15 10:16:46.115164] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:21.411 request: 00:09:21.411 { 00:09:21.411 "method": "perform_tests", 00:09:21.411 "req_id": 1 00:09:21.411 } 00:09:21.411 Got JSON-RPC error response 00:09:21.411 response: 00:09:21.411 { 00:09:21.411 "code": -32603, 00:09:21.411 "message": "bdevperf failed with error Operation not permitted" 00:09:21.411 } 00:09:21.671 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:09:21.671 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:21.671 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:09:21.671 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:09:21.671 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:09:21.671 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:21.671 00:09:21.671 real 0m8.591s 00:09:21.671 user 0m8.819s 00:09:21.671 sys 0m0.727s 00:09:21.671 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.671 10:16:46 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:09:21.671 ************************************ 00:09:21.671 END TEST bdev_error 00:09:21.671 ************************************ 00:09:21.671 10:16:46 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:21.671 10:16:46 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:09:21.671 10:16:46 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.671 10:16:46 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.671 10:16:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:21.671 ************************************ 00:09:21.671 START TEST bdev_stat 00:09:21.671 ************************************ 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=1740260 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # 
echo 'Process Bdev IO statistics testing pid: 1740260' 00:09:21.671 Process Bdev IO statistics testing pid: 1740260 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 1740260 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 1740260 ']' 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.671 10:16:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:21.671 [2024-07-15 10:16:46.447708] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:21.671 [2024-07-15 10:16:46.447750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740260 ] 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:02.5 
cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:21.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:21.930 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:21.930 [2024-07-15 10:16:46.540041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.930 [2024-07-15 10:16:46.614747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.930 [2024-07-15 10:16:46.614750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.496 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.496 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:09:22.496 10:16:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:09:22.496 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.496 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:22.754 Malloc_STAT 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.754 10:16:47 
blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.754 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:22.754 [ 00:09:22.754 { 00:09:22.754 "name": "Malloc_STAT", 00:09:22.754 "aliases": [ 00:09:22.754 "1bbc1393-3983-4f18-bc98-ac744285b05f" 00:09:22.754 ], 00:09:22.754 "product_name": "Malloc disk", 00:09:22.754 "block_size": 512, 00:09:22.754 "num_blocks": 262144, 00:09:22.754 "uuid": "1bbc1393-3983-4f18-bc98-ac744285b05f", 00:09:22.754 "assigned_rate_limits": { 00:09:22.755 "rw_ios_per_sec": 0, 00:09:22.755 "rw_mbytes_per_sec": 0, 00:09:22.755 "r_mbytes_per_sec": 0, 00:09:22.755 "w_mbytes_per_sec": 0 00:09:22.755 }, 00:09:22.755 "claimed": false, 00:09:22.755 "zoned": false, 00:09:22.755 "supported_io_types": { 00:09:22.755 "read": true, 00:09:22.755 "write": true, 00:09:22.755 "unmap": true, 00:09:22.755 "flush": true, 00:09:22.755 "reset": true, 00:09:22.755 "nvme_admin": false, 00:09:22.755 "nvme_io": false, 00:09:22.755 "nvme_io_md": false, 00:09:22.755 "write_zeroes": true, 00:09:22.755 "zcopy": true, 00:09:22.755 "get_zone_info": false, 00:09:22.755 "zone_management": false, 00:09:22.755 "zone_append": false, 00:09:22.755 "compare": false, 00:09:22.755 "compare_and_write": false, 00:09:22.755 "abort": true, 00:09:22.755 "seek_hole": false, 00:09:22.755 "seek_data": false, 00:09:22.755 "copy": true, 00:09:22.755 "nvme_iov_md": false 00:09:22.755 }, 00:09:22.755 "memory_domains": [ 00:09:22.755 { 00:09:22.755 "dma_device_id": "system", 00:09:22.755 "dma_device_type": 1 00:09:22.755 }, 00:09:22.755 { 00:09:22.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.755 "dma_device_type": 2 00:09:22.755 } 00:09:22.755 ], 00:09:22.755 "driver_specific": {} 00:09:22.755 } 00:09:22.755 ] 00:09:22.755 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.755 10:16:47 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:09:22.755 10:16:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:09:22.755 10:16:47 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:22.755 Running I/O for 10 seconds... 
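The bdev_stat run that follows compares the aggregate counters for Malloc_STAT with the sum of its per-channel counters while the 10-second randread job is in flight. A hand-driven sketch of that comparison, under the same bdevperf/rpc.py assumptions as above (the jq post-processing is purely illustrative):

  ./scripts/rpc.py bdev_malloc_create -b Malloc_STAT 128 512
  ./examples/bdev/bdevperf/bdevperf.py perform_tests &
  sleep 2
  io1=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  per_ch=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
  io2=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  # the test's invariant: the per-channel snapshot must fall between the two aggregate reads
  [ "$per_ch" -ge "$io1" ] && [ "$per_ch" -le "$io2" ] || echo "per-channel counters inconsistent"
  ./scripts/rpc.py bdev_malloc_delete Malloc_STAT

In the run below the three snapshots are 250884, 128512 + 130048 = 258560, and 272388, so the invariant holds.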
00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:09:24.654 "tick_rate": 2500000000, 00:09:24.654 "ticks": 12083166252924552, 00:09:24.654 "bdevs": [ 00:09:24.654 { 00:09:24.654 "name": "Malloc_STAT", 00:09:24.654 "bytes_read": 1027650048, 00:09:24.654 "num_read_ops": 250884, 00:09:24.654 "bytes_written": 0, 00:09:24.654 "num_write_ops": 0, 00:09:24.654 "bytes_unmapped": 0, 00:09:24.654 "num_unmap_ops": 0, 00:09:24.654 "bytes_copied": 0, 00:09:24.654 "num_copy_ops": 0, 00:09:24.654 "read_latency_ticks": 2456802563996, 00:09:24.654 "max_read_latency_ticks": 12038730, 00:09:24.654 "min_read_latency_ticks": 197090, 00:09:24.654 "write_latency_ticks": 0, 00:09:24.654 "max_write_latency_ticks": 0, 00:09:24.654 "min_write_latency_ticks": 0, 00:09:24.654 "unmap_latency_ticks": 0, 00:09:24.654 "max_unmap_latency_ticks": 0, 00:09:24.654 "min_unmap_latency_ticks": 0, 00:09:24.654 "copy_latency_ticks": 0, 00:09:24.654 "max_copy_latency_ticks": 0, 00:09:24.654 "min_copy_latency_ticks": 0, 00:09:24.654 "io_error": {} 00:09:24.654 } 00:09:24.654 ] 00:09:24.654 }' 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=250884 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:09:24.654 "tick_rate": 2500000000, 00:09:24.654 "ticks": 12083166404381210, 00:09:24.654 "name": "Malloc_STAT", 00:09:24.654 "channels": [ 00:09:24.654 { 00:09:24.654 "thread_id": 2, 00:09:24.654 "bytes_read": 526385152, 00:09:24.654 "num_read_ops": 128512, 00:09:24.654 "bytes_written": 0, 00:09:24.654 "num_write_ops": 0, 00:09:24.654 "bytes_unmapped": 0, 00:09:24.654 "num_unmap_ops": 
0, 00:09:24.654 "bytes_copied": 0, 00:09:24.654 "num_copy_ops": 0, 00:09:24.654 "read_latency_ticks": 1266282660620, 00:09:24.654 "max_read_latency_ticks": 10768746, 00:09:24.654 "min_read_latency_ticks": 6430550, 00:09:24.654 "write_latency_ticks": 0, 00:09:24.654 "max_write_latency_ticks": 0, 00:09:24.654 "min_write_latency_ticks": 0, 00:09:24.654 "unmap_latency_ticks": 0, 00:09:24.654 "max_unmap_latency_ticks": 0, 00:09:24.654 "min_unmap_latency_ticks": 0, 00:09:24.654 "copy_latency_ticks": 0, 00:09:24.654 "max_copy_latency_ticks": 0, 00:09:24.654 "min_copy_latency_ticks": 0 00:09:24.654 }, 00:09:24.654 { 00:09:24.654 "thread_id": 3, 00:09:24.654 "bytes_read": 532676608, 00:09:24.654 "num_read_ops": 130048, 00:09:24.654 "bytes_written": 0, 00:09:24.654 "num_write_ops": 0, 00:09:24.654 "bytes_unmapped": 0, 00:09:24.654 "num_unmap_ops": 0, 00:09:24.654 "bytes_copied": 0, 00:09:24.654 "num_copy_ops": 0, 00:09:24.654 "read_latency_ticks": 1266872907600, 00:09:24.654 "max_read_latency_ticks": 12038730, 00:09:24.654 "min_read_latency_ticks": 6403062, 00:09:24.654 "write_latency_ticks": 0, 00:09:24.654 "max_write_latency_ticks": 0, 00:09:24.654 "min_write_latency_ticks": 0, 00:09:24.654 "unmap_latency_ticks": 0, 00:09:24.654 "max_unmap_latency_ticks": 0, 00:09:24.654 "min_unmap_latency_ticks": 0, 00:09:24.654 "copy_latency_ticks": 0, 00:09:24.654 "max_copy_latency_ticks": 0, 00:09:24.654 "min_copy_latency_ticks": 0 00:09:24.654 } 00:09:24.654 ] 00:09:24.654 }' 00:09:24.654 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=128512 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=128512 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=130048 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=258560 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:09:24.913 "tick_rate": 2500000000, 00:09:24.913 "ticks": 12083166673559964, 00:09:24.913 "bdevs": [ 00:09:24.913 { 00:09:24.913 "name": "Malloc_STAT", 00:09:24.913 "bytes_read": 1115730432, 00:09:24.913 "num_read_ops": 272388, 00:09:24.913 "bytes_written": 0, 00:09:24.913 "num_write_ops": 0, 00:09:24.913 "bytes_unmapped": 0, 00:09:24.913 "num_unmap_ops": 0, 00:09:24.913 "bytes_copied": 0, 00:09:24.913 "num_copy_ops": 0, 00:09:24.913 "read_latency_ticks": 2670458919508, 00:09:24.913 "max_read_latency_ticks": 12038730, 00:09:24.913 "min_read_latency_ticks": 197090, 00:09:24.913 "write_latency_ticks": 0, 00:09:24.913 "max_write_latency_ticks": 0, 00:09:24.913 "min_write_latency_ticks": 0, 00:09:24.913 "unmap_latency_ticks": 0, 00:09:24.913 "max_unmap_latency_ticks": 0, 00:09:24.913 "min_unmap_latency_ticks": 0, 00:09:24.913 "copy_latency_ticks": 0, 00:09:24.913 "max_copy_latency_ticks": 0, 
00:09:24.913 "min_copy_latency_ticks": 0, 00:09:24.913 "io_error": {} 00:09:24.913 } 00:09:24.913 ] 00:09:24.913 }' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=272388 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 258560 -lt 250884 ']' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 258560 -gt 272388 ']' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:24.913 00:09:24.913 Latency(us) 00:09:24.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.913 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:24.913 Malloc_STAT : 2.16 64809.11 253.16 0.00 0.00 3941.73 1028.92 4325.38 00:09:24.913 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:24.913 Malloc_STAT : 2.16 65595.61 256.23 0.00 0.00 3894.90 642.25 4823.45 00:09:24.913 =================================================================================================================== 00:09:24.913 Total : 130404.71 509.39 0.00 0.00 3918.17 642.25 4823.45 00:09:24.913 0 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 1740260 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 1740260 ']' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 1740260 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1740260 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1740260' 00:09:24.913 killing process with pid 1740260 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 1740260 00:09:24.913 Received shutdown signal, test time was about 2.239294 seconds 00:09:24.913 00:09:24.913 Latency(us) 00:09:24.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.913 =================================================================================================================== 00:09:24.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:24.913 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 1740260 00:09:25.172 10:16:49 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:09:25.172 00:09:25.172 real 0m3.409s 00:09:25.172 user 0m6.841s 00:09:25.172 sys 0m0.387s 00:09:25.172 10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.172 
10:16:49 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:09:25.172 ************************************ 00:09:25.172 END TEST bdev_stat 00:09:25.172 ************************************ 00:09:25.172 10:16:49 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:09:25.172 10:16:49 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:09:25.172 00:09:25.172 real 1m45.767s 00:09:25.172 user 7m7.773s 00:09:25.172 sys 0m18.316s 00:09:25.172 10:16:49 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.172 10:16:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:25.172 ************************************ 00:09:25.172 END TEST blockdev_general 00:09:25.172 ************************************ 00:09:25.172 10:16:49 -- common/autotest_common.sh@1142 -- # return 0 00:09:25.172 10:16:49 -- spdk/autotest.sh@190 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:25.172 10:16:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.172 10:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.172 10:16:49 -- common/autotest_common.sh@10 -- # set +x 00:09:25.431 ************************************ 00:09:25.431 START TEST bdev_raid 00:09:25.431 ************************************ 00:09:25.431 10:16:49 bdev_raid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:25.431 * Looking for test storage... 
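Note: the START TEST / END TEST banners and the real/user/sys timing lines above come from the harness's run_test helper, which frames and times each sub-test and propagates its exit code. A simplified stand-in that produces the same shape of output is sketched below; this is an illustration only, not SPDK's actual helper (the real one also toggles xtrace and does pass/fail bookkeeping).

# Simplified, illustrative run_test-style wrapper (not the real autotest helper).
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test "bdev_raid" ./test/bdev/bdev_raid.sh   # invoked as in the trace above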
00:09:25.431 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:25.431 10:16:50 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:09:25.431 10:16:50 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:25.431 10:16:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:25.431 10:16:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.431 10:16:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:25.431 ************************************ 00:09:25.431 START TEST raid_function_test_raid0 00:09:25.431 ************************************ 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=1740889 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1740889' 00:09:25.431 Process raid pid: 1740889 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 1740889 /var/tmp/spdk-raid.sock 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 1740889 ']' 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:25.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
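Note: the raid function test above runs against a bare bdev_svc app with its own RPC socket (/var/tmp/spdk-raid.sock) and drives everything over RPC. A hedged sketch of that startup plus the raid0 construction the trace performs next: the binary path mirrors the trace, the polling loop is a crude stand-in for waitforlisten, the 32 MiB base size matches the 131072-block raid reported below, and the 64 KiB strip size is an assumption.

# Hedged sketch: start bdev_svc on a private socket, wait until RPC answers,
# then build a two-disk raid0 over malloc base bdevs, as the trace does next.
SOCK=/var/tmp/spdk-raid.sock
RPC="./scripts/rpc.py -s $SOCK"

./test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 -L bdev_raid &
svc_pid=$!
until $RPC -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done   # crude waitforlisten

$RPC bdev_malloc_create -b Base_1 32 512    # 32 MiB each -> 131072 raid0 blocks total
$RPC bdev_malloc_create -b Base_2 32 512
$RPC bdev_raid_create -n raid -z 64 -r raid0 -b "Base_1 Base_2"   # strip size assumed
$RPC bdev_raid_get_bdevs online             # should report the new "raid" bdev

# tear down later with: $RPC bdev_raid_delete raid; kill "$svc_pid"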
00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.431 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:25.431 [2024-07-15 10:16:50.200418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:25.431 [2024-07-15 10:16:50.200465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 
0000:3f:01.4 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:25.690 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:25.690 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:25.690 [2024-07-15 10:16:50.294829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.690 [2024-07-15 10:16:50.366061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.690 [2024-07-15 10:16:50.419435] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.690 [2024-07-15 10:16:50.419460] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.258 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.258 10:16:50 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:09:26.258 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:09:26.258 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:09:26.258 10:16:50 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:26.258 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:09:26.258 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:26.517 [2024-07-15 10:16:51.182325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:26.517 [2024-07-15 10:16:51.183293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:26.517 [2024-07-15 10:16:51.183337] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x28bca50 00:09:26.517 [2024-07-15 10:16:51.183344] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:26.517 [2024-07-15 10:16:51.183470] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x271fd00 00:09:26.517 [2024-07-15 10:16:51.183552] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28bca50 00:09:26.517 [2024-07-15 10:16:51.183558] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x28bca50 00:09:26.517 [2024-07-15 10:16:51.183629] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.517 Base_1 00:09:26.517 Base_2 00:09:26.517 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:26.517 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:26.517 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:26.775 [2024-07-15 10:16:51.535235] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26ffb30 00:09:26.775 /dev/nbd0 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:26.775 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.032 1+0 records in 00:09:27.032 1+0 records out 00:09:27.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225857 s, 18.1 MB/s 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:27.032 { 00:09:27.032 "nbd_device": "/dev/nbd0", 00:09:27.032 "bdev_name": "raid" 00:09:27.032 } 00:09:27.032 ]' 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:27.032 { 00:09:27.032 "nbd_device": "/dev/nbd0", 00:09:27.032 "bdev_name": "raid" 00:09:27.032 } 00:09:27.032 ]' 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:27.032 10:16:51 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:27.032 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:27.290 4096+0 records in 00:09:27.290 4096+0 records out 00:09:27.290 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0287497 s, 72.9 MB/s 00:09:27.290 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:27.290 4096+0 records in 00:09:27.290 4096+0 records out 00:09:27.290 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.139324 s, 15.1 MB/s 00:09:27.290 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:27.290 10:16:51 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:27.290 128+0 records in 00:09:27.290 128+0 records out 00:09:27.290 65536 bytes (66 kB, 64 KiB) copied, 0.000807791 s, 81.1 MB/s 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd 
if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:27.290 2035+0 records in 00:09:27.290 2035+0 records out 00:09:27.290 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0111954 s, 93.1 MB/s 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:27.290 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:27.548 456+0 records in 00:09:27.548 456+0 records out 00:09:27.548 233472 bytes (233 kB, 228 KiB) copied, 0.00269803 s, 86.5 MB/s 00:09:27.548 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:27.548 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:27.548 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:27.548 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:27.548 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:27.548 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:09:27.548 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:27.548 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:27.549 [2024-07-15 10:16:52.292526] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:27.549 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 1740889 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 1740889 ']' 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 1740889 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1740889 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1740889' 00:09:27.805 killing process with pid 1740889 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 1740889 00:09:27.805 [2024-07-15 10:16:52.579673] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.805 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 1740889 00:09:27.805 [2024-07-15 10:16:52.579726] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.806 [2024-07-15 10:16:52.579757] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.806 [2024-07-15 10:16:52.579765] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28bca50 name raid, state offline 00:09:28.063 [2024-07-15 10:16:52.594929] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.063 10:16:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:09:28.063 00:09:28.063 real 0m2.621s 00:09:28.063 user 0m3.365s 00:09:28.063 sys 0m0.999s 00:09:28.063 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.063 10:16:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:28.063 ************************************ 00:09:28.063 END TEST raid_function_test_raid0 00:09:28.063 ************************************ 00:09:28.063 10:16:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:28.063 10:16:52 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:09:28.063 10:16:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:28.063 10:16:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.063 10:16:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.063 ************************************ 00:09:28.063 START TEST raid_function_test_concat 00:09:28.063 ************************************ 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=1741495 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1741495' 00:09:28.063 Process raid pid: 1741495 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 1741495 /var/tmp/spdk-raid.sock 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 1741495 ']' 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:28.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:28.063 10:16:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:28.320 [2024-07-15 10:16:52.890265] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:28.320 [2024-07-15 10:16:52.890305] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:28.320 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:28.320 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:28.320 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:28.320 [2024-07-15 10:16:52.979470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.320 [2024-07-15 10:16:53.048614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.577 [2024-07-15 10:16:53.109703] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.577 [2024-07-15 10:16:53.109725] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:29.141 [2024-07-15 10:16:53.864824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:29.141 [2024-07-15 10:16:53.865780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:29.141 [2024-07-15 10:16:53.865820] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2797a50 00:09:29.141 [2024-07-15 10:16:53.865827] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:29.141 [2024-07-15 10:16:53.865956] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25fad00 00:09:29.141 [2024-07-15 10:16:53.866034] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2797a50 00:09:29.141 [2024-07-15 10:16:53.866040] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x2797a50 00:09:29.141 [2024-07-15 10:16:53.866104] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:29.141 Base_1 00:09:29.141 Base_2 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:29.141 10:16:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:29.408 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:29.665 [2024-07-15 10:16:54.217773] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25da9b0 00:09:29.665 /dev/nbd0 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.665 1+0 records in 00:09:29.665 1+0 records out 00:09:29.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260649 s, 15.7 MB/s 00:09:29.665 
10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:29.665 { 00:09:29.665 "nbd_device": "/dev/nbd0", 00:09:29.665 "bdev_name": "raid" 00:09:29.665 } 00:09:29.665 ]' 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:29.665 { 00:09:29.665 "nbd_device": "/dev/nbd0", 00:09:29.665 "bdev_name": "raid" 00:09:29.665 } 00:09:29.665 ]' 00:09:29.665 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # 
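Note: as in the raid0 run above, the concat test only proceeds once the exported /dev/nbd0 shows up in /proc/partitions and a single direct 4096-byte read succeeds (the 1+0 records / stat -c %s exchange above). A minimal sketch of that readiness check; the device name is taken from the trace, while the /tmp scratch path and retry budget are assumptions.

# Hedged sketch of the nbd readiness check traced above: wait for the device
# node to register, then prove it is readable with one direct 4 KiB read.
nbd=nbd0
for _ in $(seq 1 20); do
    grep -q -w "$nbd" /proc/partitions && break
    sleep 0.1
done
dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && echo "$nbd is ready"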
blksize=512 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:29.928 4096+0 records in 00:09:29.928 4096+0 records out 00:09:29.928 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0287183 s, 73.0 MB/s 00:09:29.928 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:30.198 4096+0 records in 00:09:30.198 4096+0 records out 00:09:30.198 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.195236 s, 10.7 MB/s 00:09:30.198 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:09:30.198 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:30.198 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:09:30.198 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:30.198 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:30.199 128+0 records in 00:09:30.199 128+0 records out 00:09:30.199 65536 bytes (66 kB, 64 KiB) copied, 0.000641432 s, 102 MB/s 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:30.199 2035+0 records in 00:09:30.199 2035+0 records out 00:09:30.199 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0116153 s, 89.7 MB/s 00:09:30.199 10:16:54 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:30.199 456+0 records in 00:09:30.199 456+0 records out 00:09:30.199 233472 bytes (233 kB, 228 KiB) copied, 0.0027176 s, 85.9 MB/s 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.199 10:16:54 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:30.454 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:30.454 [2024-07-15 10:16:55.031667] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.454 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:30.454 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:30.454 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.454 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.454 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:30.454 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:30.455 10:16:55 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:30.455 10:16:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:30.455 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:30.455 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:30.455 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:30.455 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:30.455 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 1741495 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 1741495 ']' 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 1741495 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1741495 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1741495' 00:09:30.713 killing process with pid 1741495 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 1741495 00:09:30.713 [2024-07-15 10:16:55.314211] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.713 [2024-07-15 10:16:55.314259] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 1741495 00:09:30.713 [2024-07-15 10:16:55.314289] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.713 [2024-07-15 10:16:55.314297] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2797a50 name raid, state offline 00:09:30.713 [2024-07-15 10:16:55.329463] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.713 
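The raid_function_test_concat pass above reduces to a reusable shell pattern: seed the exported /dev/nbd0 and a local reference file with the same 2 MiB of random data, then for each (offset, count) pair zero that range in the reference file, blkdiscard the matching byte range on the NBD device, flush, and cmp the whole region again. The sketch below reproduces that pattern as a standalone script; it assumes a raid bdev is already exported as /dev/nbd0 and that /raidtest is writable, and the variable names, set -e handling, and final echo are illustrative rather than copied from bdev_raid.sh.

#!/usr/bin/env bash
# Sketch of the unmap/verify loop above (assumption: a raid bdev is already exported as /dev/nbd0).
set -e
nbd=/dev/nbd0
ref=/raidtest/raidrandtest
blksize=$(lsblk -o LOG-SEC --noheadings "$nbd" | head -n1 | tr -d ' ')   # 512 in the run above
blocks=4096                                                              # 2 MiB total

# Seed the device and a local reference file with identical random data, then confirm they match.
dd if=/dev/urandom of="$ref" bs="$blksize" count="$blocks"
dd if="$ref" of="$nbd" bs="$blksize" count="$blocks" oflag=direct
blockdev --flushbufs "$nbd"
cmp -b -n $((blksize * blocks)) "$ref" "$nbd"

# For each range: zero it in the reference, discard it on the device, flush, and re-compare.
# Like the test above, this relies on discarded blocks reading back as zeroes.
offs=(0 1028 321)
nums=(128 2035 456)
for i in "${!offs[@]}"; do
  dd if=/dev/zero of="$ref" bs="$blksize" seek="${offs[i]}" count="${nums[i]}" conv=notrunc
  blkdiscard -o $((offs[i] * blksize)) -l $((nums[i] * blksize)) "$nbd"
  blockdev --flushbufs "$nbd"
  cmp -b -n $((blksize * blocks)) "$ref" "$nbd"
done
echo "unmap/verify passes OK"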
10:16:55 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:09:30.713 00:09:30.713 real 0m2.658s 00:09:30.713 user 0m3.334s 00:09:30.713 sys 0m1.049s 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.713 10:16:55 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:30.713 ************************************ 00:09:30.713 END TEST raid_function_test_concat 00:09:30.713 ************************************ 00:09:30.970 10:16:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:30.970 10:16:55 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:09:30.970 10:16:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:30.970 10:16:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.970 10:16:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.970 ************************************ 00:09:30.970 START TEST raid0_resize_test 00:09:30.970 ************************************ 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=1742036 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 1742036' 00:09:30.970 Process raid pid: 1742036 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 1742036 /var/tmp/spdk-raid.sock 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 1742036 ']' 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:30.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.970 10:16:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:30.970 [2024-07-15 10:16:55.624322] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:30.970 [2024-07-15 10:16:55.624367] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.970 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.970 EAL: Requested device 0000:3d:01.0 cannot be used [the identical qat_pci_device_allocate()/EAL "cannot be used" pair repeats here for every remaining QAT VF from 0000:3d:01.1 through 0000:3f:01.6]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:30.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:30.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:30.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:30.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:30.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:30.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:30.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:30.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.971 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:30.971 [2024-07-15 10:16:55.715678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.228 [2024-07-15 10:16:55.789422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.228 [2024-07-15 10:16:55.841180] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.228 [2024-07-15 10:16:55.841204] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.792 10:16:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.792 10:16:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:09:31.792 10:16:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:09:31.792 Base_1 00:09:32.050 10:16:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:09:32.050 Base_2 00:09:32.050 10:16:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:09:32.307 [2024-07-15 10:16:56.896945] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:32.308 [2024-07-15 10:16:56.897982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:32.308 [2024-07-15 10:16:56.898017] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x256ac80 00:09:32.308 [2024-07-15 10:16:56.898024] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:32.308 [2024-07-15 10:16:56.898158] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20ae030 00:09:32.308 [2024-07-15 10:16:56.898224] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x256ac80 00:09:32.308 [2024-07-15 10:16:56.898230] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x256ac80 00:09:32.308 [2024-07-15 10:16:56.898300] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:09:32.308 10:16:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:09:32.308 [2024-07-15 10:16:57.065358] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:32.308 [2024-07-15 10:16:57.065370] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:32.308 true 00:09:32.308 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:32.308 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:09:32.565 [2024-07-15 10:16:57.237887] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.565 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:09:32.565 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:09:32.565 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:09:32.565 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:09:32.823 [2024-07-15 10:16:57.406229] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:32.823 [2024-07-15 10:16:57.406241] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:32.823 [2024-07-15 10:16:57.406258] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:32.823 true 00:09:32.823 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:09:32.824 [2024-07-15 10:16:57.574764] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 1742036 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 1742036 ']' 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 1742036 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.824 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1742036 00:09:33.082 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:33.082 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:33.082 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 1742036' 00:09:33.082 killing process with pid 1742036 00:09:33.082 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 1742036 00:09:33.082 [2024-07-15 10:16:57.643691] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.082 [2024-07-15 10:16:57.643730] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.082 [2024-07-15 10:16:57.643758] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.082 [2024-07-15 10:16:57.643766] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x256ac80 name Raid, state offline 00:09:33.082 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 1742036 00:09:33.082 [2024-07-15 10:16:57.644820] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.082 10:16:57 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:09:33.082 00:09:33.082 real 0m2.227s 00:09:33.082 user 0m3.297s 00:09:33.082 sys 0m0.494s 00:09:33.082 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.082 10:16:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.082 ************************************ 00:09:33.082 END TEST raid0_resize_test 00:09:33.082 ************************************ 00:09:33.082 10:16:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:33.082 10:16:57 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:33.082 10:16:57 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:33.082 10:16:57 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:33.082 10:16:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:33.082 10:16:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.082 10:16:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.341 ************************************ 00:09:33.341 START TEST raid_state_function_test 00:09:33.341 ************************************ 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1742419 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1742419' 00:09:33.341 Process raid pid: 1742419 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1742419 /var/tmp/spdk-raid.sock 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1742419 ']' 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:33.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.341 10:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.341 [2024-07-15 10:16:57.950341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:33.341 [2024-07-15 10:16:57.950384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3d:01.0 cannot be used [the identical qat_pci_device_allocate()/EAL "cannot be used" pair repeats here for every remaining QAT VF from 0000:3d:01.1 through 0000:3f:01.6]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:33.341 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.341 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:33.341 [2024-07-15 10:16:58.042503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.341 [2024-07-15 10:16:58.116429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.599 [2024-07-15 10:16:58.169341] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.599 [2024-07-15 10:16:58.169361] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:34.167 [2024-07-15 10:16:58.892684] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.167 [2024-07-15 10:16:58.892719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.167 [2024-07-15 10:16:58.892726] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.167 [2024-07-15 10:16:58.892736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.167 10:16:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.167 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.461 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:34.461 "name": "Existed_Raid", 00:09:34.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.461 "strip_size_kb": 64, 00:09:34.461 "state": "configuring", 00:09:34.461 "raid_level": "raid0", 00:09:34.461 "superblock": false, 00:09:34.461 "num_base_bdevs": 2, 00:09:34.461 "num_base_bdevs_discovered": 0, 00:09:34.461 "num_base_bdevs_operational": 2, 00:09:34.461 "base_bdevs_list": [ 00:09:34.461 { 00:09:34.461 "name": "BaseBdev1", 00:09:34.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.461 "is_configured": false, 00:09:34.461 "data_offset": 0, 00:09:34.461 "data_size": 0 00:09:34.461 }, 00:09:34.461 { 00:09:34.461 "name": "BaseBdev2", 00:09:34.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.461 "is_configured": false, 00:09:34.461 "data_offset": 0, 00:09:34.461 "data_size": 0 00:09:34.461 } 00:09:34.461 ] 00:09:34.461 }' 00:09:34.461 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:34.461 10:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.033 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:35.033 [2024-07-15 10:16:59.710738] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.033 [2024-07-15 10:16:59.710763] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x860f20 name Existed_Raid, state configuring 00:09:35.033 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:35.291 [2024-07-15 10:16:59.863118] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.291 [2024-07-15 10:16:59.863140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.291 [2024-07-15 10:16:59.863146] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.291 [2024-07-15 10:16:59.863153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.291 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.291 [2024-07-15 10:17:00.044101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.291 BaseBdev1 00:09:35.291 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:35.291 
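Stripped of the xtrace noise, the raid_state_function_test sequence so far is a short RPC conversation with the bdev_svc app on /var/tmp/spdk-raid.sock: create the raid0 array by name before its base bdevs exist, confirm it sits in the "configuring" state, then register the first 32 MiB malloc base bdev and confirm the array is still configuring with one base discovered. A minimal sketch of that conversation follows; the rpc.py path, socket, and RPC arguments are the ones shown in the log, while the jq filters and inline comments are illustrative.

# Minimal sketch of the RPC sequence above, assuming bdev_svc is already listening on the socket.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Creating the raid before its base bdevs exist leaves it in the "configuring" state.
$rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # configuring

# The first 32 MiB / 512 B-block malloc base bdev is claimed, but the array stays configuring.
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_wait_for_examine
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'   # 1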
10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:35.291 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:35.291 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:35.291 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:35.291 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:35.291 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:35.546 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.802 [ 00:09:35.802 { 00:09:35.802 "name": "BaseBdev1", 00:09:35.802 "aliases": [ 00:09:35.802 "7ab4fa73-462e-4a96-a8d0-cd1bdc10d202" 00:09:35.802 ], 00:09:35.802 "product_name": "Malloc disk", 00:09:35.802 "block_size": 512, 00:09:35.802 "num_blocks": 65536, 00:09:35.802 "uuid": "7ab4fa73-462e-4a96-a8d0-cd1bdc10d202", 00:09:35.802 "assigned_rate_limits": { 00:09:35.802 "rw_ios_per_sec": 0, 00:09:35.802 "rw_mbytes_per_sec": 0, 00:09:35.802 "r_mbytes_per_sec": 0, 00:09:35.802 "w_mbytes_per_sec": 0 00:09:35.802 }, 00:09:35.802 "claimed": true, 00:09:35.802 "claim_type": "exclusive_write", 00:09:35.802 "zoned": false, 00:09:35.802 "supported_io_types": { 00:09:35.802 "read": true, 00:09:35.802 "write": true, 00:09:35.802 "unmap": true, 00:09:35.802 "flush": true, 00:09:35.802 "reset": true, 00:09:35.802 "nvme_admin": false, 00:09:35.802 "nvme_io": false, 00:09:35.802 "nvme_io_md": false, 00:09:35.802 "write_zeroes": true, 00:09:35.802 "zcopy": true, 00:09:35.802 "get_zone_info": false, 00:09:35.802 "zone_management": false, 00:09:35.802 "zone_append": false, 00:09:35.802 "compare": false, 00:09:35.802 "compare_and_write": false, 00:09:35.802 "abort": true, 00:09:35.802 "seek_hole": false, 00:09:35.802 "seek_data": false, 00:09:35.802 "copy": true, 00:09:35.802 "nvme_iov_md": false 00:09:35.802 }, 00:09:35.802 "memory_domains": [ 00:09:35.802 { 00:09:35.802 "dma_device_id": "system", 00:09:35.802 "dma_device_type": 1 00:09:35.802 }, 00:09:35.802 { 00:09:35.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.802 "dma_device_type": 2 00:09:35.802 } 00:09:35.802 ], 00:09:35.802 "driver_specific": {} 00:09:35.802 } 00:09:35.802 ] 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:35.802 "name": "Existed_Raid", 00:09:35.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.802 "strip_size_kb": 64, 00:09:35.802 "state": "configuring", 00:09:35.802 "raid_level": "raid0", 00:09:35.802 "superblock": false, 00:09:35.802 "num_base_bdevs": 2, 00:09:35.802 "num_base_bdevs_discovered": 1, 00:09:35.802 "num_base_bdevs_operational": 2, 00:09:35.802 "base_bdevs_list": [ 00:09:35.802 { 00:09:35.802 "name": "BaseBdev1", 00:09:35.802 "uuid": "7ab4fa73-462e-4a96-a8d0-cd1bdc10d202", 00:09:35.802 "is_configured": true, 00:09:35.802 "data_offset": 0, 00:09:35.802 "data_size": 65536 00:09:35.802 }, 00:09:35.802 { 00:09:35.802 "name": "BaseBdev2", 00:09:35.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.802 "is_configured": false, 00:09:35.802 "data_offset": 0, 00:09:35.802 "data_size": 0 00:09:35.802 } 00:09:35.802 ] 00:09:35.802 }' 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:35.802 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.364 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:36.621 [2024-07-15 10:17:01.231150] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:36.621 [2024-07-15 10:17:01.231183] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x860810 name Existed_Raid, state configuring 00:09:36.621 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:36.621 [2024-07-15 10:17:01.403621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:36.621 [2024-07-15 10:17:01.404681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.621 [2024-07-15 10:17:01.404710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:36.879 "name": "Existed_Raid", 00:09:36.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.879 "strip_size_kb": 64, 00:09:36.879 "state": "configuring", 00:09:36.879 "raid_level": "raid0", 00:09:36.879 "superblock": false, 00:09:36.879 "num_base_bdevs": 2, 00:09:36.879 "num_base_bdevs_discovered": 1, 00:09:36.879 "num_base_bdevs_operational": 2, 00:09:36.879 "base_bdevs_list": [ 00:09:36.879 { 00:09:36.879 "name": "BaseBdev1", 00:09:36.879 "uuid": "7ab4fa73-462e-4a96-a8d0-cd1bdc10d202", 00:09:36.879 "is_configured": true, 00:09:36.879 "data_offset": 0, 00:09:36.879 "data_size": 65536 00:09:36.879 }, 00:09:36.879 { 00:09:36.879 "name": "BaseBdev2", 00:09:36.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.879 "is_configured": false, 00:09:36.879 "data_offset": 0, 00:09:36.879 "data_size": 0 00:09:36.879 } 00:09:36.879 ] 00:09:36.879 }' 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:36.879 10:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.444 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.701 [2024-07-15 10:17:02.244439] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.701 [2024-07-15 10:17:02.244468] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x861600 00:09:37.701 [2024-07-15 10:17:02.244474] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:37.701 [2024-07-15 10:17:02.244600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x862ef0 00:09:37.701 [2024-07-15 10:17:02.244682] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x861600 00:09:37.701 [2024-07-15 10:17:02.244688] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x861600 00:09:37.701 [2024-07-15 10:17:02.244801] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.701 BaseBdev2 00:09:37.701 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 
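verify_raid_bdev_state, called repeatedly above and again in the @270 check that follows, is essentially a set of jq comparisons over the JSON that bdev_raid_get_bdevs prints. A minimal stand-in under the same rpc.py/socket assumptions is sketched below; the helper name is hypothetical, while the JSON fields it reads (state, raid_level, strip_size_kb, num_base_bdevs_operational) are the ones visible in the dumps above.

# Hypothetical stand-in for verify_raid_bdev_state; same socket and rpc.py path as above.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

verify_raid_state() {
  local name=$1 want_state=$2 want_level=$3 want_strip=$4 want_operational=$5
  local info
  info=$($rpc bdev_raid_get_bdevs all | jq ".[] | select(.name == \"$name\")")
  [ "$(echo "$info" | jq -r .state)" = "$want_state" ] &&
  [ "$(echo "$info" | jq -r .raid_level)" = "$want_level" ] &&
  [ "$(echo "$info" | jq -r .strip_size_kb)" = "$want_strip" ] &&
  [ "$(echo "$info" | jq -r .num_base_bdevs_operational)" = "$want_operational" ]
}

# With both 65536-block base bdevs claimed, the @270 check that follows is equivalent to:
verify_raid_state Existed_Raid online raid0 64 2 && echo "Existed_Raid is online"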
00:09:37.701 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:37.701 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:37.701 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:37.701 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:37.701 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:37.701 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:37.701 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.959 [ 00:09:37.959 { 00:09:37.959 "name": "BaseBdev2", 00:09:37.959 "aliases": [ 00:09:37.959 "db135704-f5c8-4150-81ae-9fdd27c91158" 00:09:37.959 ], 00:09:37.959 "product_name": "Malloc disk", 00:09:37.959 "block_size": 512, 00:09:37.959 "num_blocks": 65536, 00:09:37.959 "uuid": "db135704-f5c8-4150-81ae-9fdd27c91158", 00:09:37.959 "assigned_rate_limits": { 00:09:37.959 "rw_ios_per_sec": 0, 00:09:37.959 "rw_mbytes_per_sec": 0, 00:09:37.959 "r_mbytes_per_sec": 0, 00:09:37.959 "w_mbytes_per_sec": 0 00:09:37.959 }, 00:09:37.959 "claimed": true, 00:09:37.959 "claim_type": "exclusive_write", 00:09:37.959 "zoned": false, 00:09:37.959 "supported_io_types": { 00:09:37.959 "read": true, 00:09:37.959 "write": true, 00:09:37.959 "unmap": true, 00:09:37.959 "flush": true, 00:09:37.959 "reset": true, 00:09:37.959 "nvme_admin": false, 00:09:37.959 "nvme_io": false, 00:09:37.959 "nvme_io_md": false, 00:09:37.959 "write_zeroes": true, 00:09:37.959 "zcopy": true, 00:09:37.959 "get_zone_info": false, 00:09:37.959 "zone_management": false, 00:09:37.959 "zone_append": false, 00:09:37.959 "compare": false, 00:09:37.959 "compare_and_write": false, 00:09:37.959 "abort": true, 00:09:37.959 "seek_hole": false, 00:09:37.959 "seek_data": false, 00:09:37.959 "copy": true, 00:09:37.959 "nvme_iov_md": false 00:09:37.959 }, 00:09:37.959 "memory_domains": [ 00:09:37.959 { 00:09:37.959 "dma_device_id": "system", 00:09:37.959 "dma_device_type": 1 00:09:37.960 }, 00:09:37.960 { 00:09:37.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.960 "dma_device_type": 2 00:09:37.960 } 00:09:37.960 ], 00:09:37.960 "driver_specific": {} 00:09:37.960 } 00:09:37.960 ] 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
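The verify_raid_bdev_properties pass a little further down cross-checks the assembled volume against its base bdevs. Because Existed_Raid is a raid0 over two 65536-block, 512 B-block malloc bdevs, bdev_get_bdevs -b Existed_Raid should report 131072 blocks and the same logical block size as its members. A hedged sketch of that cross-check follows; the rpc.py path, RPC names, and the base_bdevs_list jq filter are taken from the log, while the assertions themselves are illustrative (the real helper also compares md_size, md_interleave, and dif_type).

# Sketch of the raid-vs-base-bdev property cross-check; same socket/rpc.py assumptions as above.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

raid_json=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
test "$(echo "$raid_json" | jq -r .num_blocks)" -eq 131072   # 2 x 65536 blocks striped as raid0
test "$(echo "$raid_json" | jq -r .block_size)" -eq 512

# Every configured base bdev should expose the same block size as the assembled volume.
for name in $(echo "$raid_json" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'); do
  base_json=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
  test "$(echo "$base_json" | jq -r .block_size)" -eq "$(echo "$raid_json" | jq -r .block_size)"
done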
00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.960 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.219 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:38.219 "name": "Existed_Raid", 00:09:38.219 "uuid": "fb4112ce-c42f-48e0-a480-96ab61845ec6", 00:09:38.219 "strip_size_kb": 64, 00:09:38.219 "state": "online", 00:09:38.219 "raid_level": "raid0", 00:09:38.219 "superblock": false, 00:09:38.219 "num_base_bdevs": 2, 00:09:38.219 "num_base_bdevs_discovered": 2, 00:09:38.219 "num_base_bdevs_operational": 2, 00:09:38.219 "base_bdevs_list": [ 00:09:38.219 { 00:09:38.219 "name": "BaseBdev1", 00:09:38.219 "uuid": "7ab4fa73-462e-4a96-a8d0-cd1bdc10d202", 00:09:38.219 "is_configured": true, 00:09:38.219 "data_offset": 0, 00:09:38.219 "data_size": 65536 00:09:38.219 }, 00:09:38.219 { 00:09:38.219 "name": "BaseBdev2", 00:09:38.219 "uuid": "db135704-f5c8-4150-81ae-9fdd27c91158", 00:09:38.219 "is_configured": true, 00:09:38.219 "data_offset": 0, 00:09:38.219 "data_size": 65536 00:09:38.219 } 00:09:38.219 ] 00:09:38.219 }' 00:09:38.219 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:38.219 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.479 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.479 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:38.479 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:38.479 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:38.479 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:38.479 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:38.479 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:38.479 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:38.738 [2024-07-15 10:17:03.415639] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.738 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:38.738 "name": "Existed_Raid", 00:09:38.738 "aliases": [ 00:09:38.738 "fb4112ce-c42f-48e0-a480-96ab61845ec6" 00:09:38.738 ], 00:09:38.738 "product_name": "Raid Volume", 00:09:38.738 "block_size": 512, 00:09:38.738 "num_blocks": 131072, 00:09:38.738 "uuid": 
"fb4112ce-c42f-48e0-a480-96ab61845ec6", 00:09:38.738 "assigned_rate_limits": { 00:09:38.738 "rw_ios_per_sec": 0, 00:09:38.738 "rw_mbytes_per_sec": 0, 00:09:38.738 "r_mbytes_per_sec": 0, 00:09:38.738 "w_mbytes_per_sec": 0 00:09:38.738 }, 00:09:38.738 "claimed": false, 00:09:38.738 "zoned": false, 00:09:38.738 "supported_io_types": { 00:09:38.738 "read": true, 00:09:38.738 "write": true, 00:09:38.738 "unmap": true, 00:09:38.738 "flush": true, 00:09:38.738 "reset": true, 00:09:38.738 "nvme_admin": false, 00:09:38.738 "nvme_io": false, 00:09:38.738 "nvme_io_md": false, 00:09:38.738 "write_zeroes": true, 00:09:38.738 "zcopy": false, 00:09:38.738 "get_zone_info": false, 00:09:38.738 "zone_management": false, 00:09:38.738 "zone_append": false, 00:09:38.738 "compare": false, 00:09:38.738 "compare_and_write": false, 00:09:38.738 "abort": false, 00:09:38.738 "seek_hole": false, 00:09:38.738 "seek_data": false, 00:09:38.738 "copy": false, 00:09:38.738 "nvme_iov_md": false 00:09:38.738 }, 00:09:38.738 "memory_domains": [ 00:09:38.738 { 00:09:38.738 "dma_device_id": "system", 00:09:38.738 "dma_device_type": 1 00:09:38.738 }, 00:09:38.738 { 00:09:38.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.738 "dma_device_type": 2 00:09:38.738 }, 00:09:38.738 { 00:09:38.738 "dma_device_id": "system", 00:09:38.738 "dma_device_type": 1 00:09:38.738 }, 00:09:38.738 { 00:09:38.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.738 "dma_device_type": 2 00:09:38.738 } 00:09:38.738 ], 00:09:38.738 "driver_specific": { 00:09:38.738 "raid": { 00:09:38.738 "uuid": "fb4112ce-c42f-48e0-a480-96ab61845ec6", 00:09:38.738 "strip_size_kb": 64, 00:09:38.738 "state": "online", 00:09:38.738 "raid_level": "raid0", 00:09:38.738 "superblock": false, 00:09:38.738 "num_base_bdevs": 2, 00:09:38.738 "num_base_bdevs_discovered": 2, 00:09:38.738 "num_base_bdevs_operational": 2, 00:09:38.738 "base_bdevs_list": [ 00:09:38.738 { 00:09:38.738 "name": "BaseBdev1", 00:09:38.738 "uuid": "7ab4fa73-462e-4a96-a8d0-cd1bdc10d202", 00:09:38.738 "is_configured": true, 00:09:38.738 "data_offset": 0, 00:09:38.738 "data_size": 65536 00:09:38.738 }, 00:09:38.738 { 00:09:38.738 "name": "BaseBdev2", 00:09:38.738 "uuid": "db135704-f5c8-4150-81ae-9fdd27c91158", 00:09:38.738 "is_configured": true, 00:09:38.738 "data_offset": 0, 00:09:38.738 "data_size": 65536 00:09:38.738 } 00:09:38.739 ] 00:09:38.739 } 00:09:38.739 } 00:09:38.739 }' 00:09:38.739 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.739 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:38.739 BaseBdev2' 00:09:38.739 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:38.739 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:38.739 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:38.996 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:38.996 "name": "BaseBdev1", 00:09:38.996 "aliases": [ 00:09:38.996 "7ab4fa73-462e-4a96-a8d0-cd1bdc10d202" 00:09:38.996 ], 00:09:38.996 "product_name": "Malloc disk", 00:09:38.996 "block_size": 512, 00:09:38.996 "num_blocks": 65536, 00:09:38.996 "uuid": "7ab4fa73-462e-4a96-a8d0-cd1bdc10d202", 
00:09:38.996 "assigned_rate_limits": { 00:09:38.996 "rw_ios_per_sec": 0, 00:09:38.996 "rw_mbytes_per_sec": 0, 00:09:38.996 "r_mbytes_per_sec": 0, 00:09:38.996 "w_mbytes_per_sec": 0 00:09:38.996 }, 00:09:38.996 "claimed": true, 00:09:38.996 "claim_type": "exclusive_write", 00:09:38.996 "zoned": false, 00:09:38.996 "supported_io_types": { 00:09:38.996 "read": true, 00:09:38.996 "write": true, 00:09:38.996 "unmap": true, 00:09:38.996 "flush": true, 00:09:38.996 "reset": true, 00:09:38.996 "nvme_admin": false, 00:09:38.996 "nvme_io": false, 00:09:38.996 "nvme_io_md": false, 00:09:38.996 "write_zeroes": true, 00:09:38.996 "zcopy": true, 00:09:38.996 "get_zone_info": false, 00:09:38.996 "zone_management": false, 00:09:38.996 "zone_append": false, 00:09:38.996 "compare": false, 00:09:38.996 "compare_and_write": false, 00:09:38.996 "abort": true, 00:09:38.996 "seek_hole": false, 00:09:38.997 "seek_data": false, 00:09:38.997 "copy": true, 00:09:38.997 "nvme_iov_md": false 00:09:38.997 }, 00:09:38.997 "memory_domains": [ 00:09:38.997 { 00:09:38.997 "dma_device_id": "system", 00:09:38.997 "dma_device_type": 1 00:09:38.997 }, 00:09:38.997 { 00:09:38.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.997 "dma_device_type": 2 00:09:38.997 } 00:09:38.997 ], 00:09:38.997 "driver_specific": {} 00:09:38.997 }' 00:09:38.997 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:38.997 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:38.997 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:38.997 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:38.997 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:38.997 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:38.997 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:39.254 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:39.254 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:39.254 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:39.254 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:39.254 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:39.254 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:39.254 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:39.254 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:39.511 "name": "BaseBdev2", 00:09:39.511 "aliases": [ 00:09:39.511 "db135704-f5c8-4150-81ae-9fdd27c91158" 00:09:39.511 ], 00:09:39.511 "product_name": "Malloc disk", 00:09:39.511 "block_size": 512, 00:09:39.511 "num_blocks": 65536, 00:09:39.511 "uuid": "db135704-f5c8-4150-81ae-9fdd27c91158", 00:09:39.511 "assigned_rate_limits": { 00:09:39.511 "rw_ios_per_sec": 0, 00:09:39.511 "rw_mbytes_per_sec": 0, 00:09:39.511 "r_mbytes_per_sec": 0, 00:09:39.511 "w_mbytes_per_sec": 0 
00:09:39.511 }, 00:09:39.511 "claimed": true, 00:09:39.511 "claim_type": "exclusive_write", 00:09:39.511 "zoned": false, 00:09:39.511 "supported_io_types": { 00:09:39.511 "read": true, 00:09:39.511 "write": true, 00:09:39.511 "unmap": true, 00:09:39.511 "flush": true, 00:09:39.511 "reset": true, 00:09:39.511 "nvme_admin": false, 00:09:39.511 "nvme_io": false, 00:09:39.511 "nvme_io_md": false, 00:09:39.511 "write_zeroes": true, 00:09:39.511 "zcopy": true, 00:09:39.511 "get_zone_info": false, 00:09:39.511 "zone_management": false, 00:09:39.511 "zone_append": false, 00:09:39.511 "compare": false, 00:09:39.511 "compare_and_write": false, 00:09:39.511 "abort": true, 00:09:39.511 "seek_hole": false, 00:09:39.511 "seek_data": false, 00:09:39.511 "copy": true, 00:09:39.511 "nvme_iov_md": false 00:09:39.511 }, 00:09:39.511 "memory_domains": [ 00:09:39.511 { 00:09:39.511 "dma_device_id": "system", 00:09:39.511 "dma_device_type": 1 00:09:39.511 }, 00:09:39.511 { 00:09:39.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.511 "dma_device_type": 2 00:09:39.511 } 00:09:39.511 ], 00:09:39.511 "driver_specific": {} 00:09:39.511 }' 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:39.511 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:39.770 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:39.770 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:39.770 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:39.770 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:39.770 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:39.770 [2024-07-15 10:17:04.546392] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.770 [2024-07-15 10:17:04.546415] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.770 [2024-07-15 10:17:04.546443] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.029 "name": "Existed_Raid", 00:09:40.029 "uuid": "fb4112ce-c42f-48e0-a480-96ab61845ec6", 00:09:40.029 "strip_size_kb": 64, 00:09:40.029 "state": "offline", 00:09:40.029 "raid_level": "raid0", 00:09:40.029 "superblock": false, 00:09:40.029 "num_base_bdevs": 2, 00:09:40.029 "num_base_bdevs_discovered": 1, 00:09:40.029 "num_base_bdevs_operational": 1, 00:09:40.029 "base_bdevs_list": [ 00:09:40.029 { 00:09:40.029 "name": null, 00:09:40.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.029 "is_configured": false, 00:09:40.029 "data_offset": 0, 00:09:40.029 "data_size": 65536 00:09:40.029 }, 00:09:40.029 { 00:09:40.029 "name": "BaseBdev2", 00:09:40.029 "uuid": "db135704-f5c8-4150-81ae-9fdd27c91158", 00:09:40.029 "is_configured": true, 00:09:40.029 "data_offset": 0, 00:09:40.029 "data_size": 65536 00:09:40.029 } 00:09:40.029 ] 00:09:40.029 }' 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.029 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.597 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:40.597 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:40.597 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.597 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:40.597 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:40.597 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.597 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:40.856 
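The offline verification above is driven entirely through rpc.py and jq: bdev_raid_get_bdevs all is filtered down to the Existed_Raid entry and its fields are compared against the expected values (state offline, one base bdev discovered and operational, because raid0 carries no redundancy). A minimal stand-alone sketch of the same probe, reusing the socket and jq filter from this run; the helper function and the particular fields compared are illustrative, not the harness code:

  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  info=$(rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
  # Expect: the raid0 array dropped to offline with a single member left.
  [[ $(echo "$info" | jq -r .state) == offline ]]
  [[ $(echo "$info" | jq -r .num_base_bdevs_discovered) == 1 ]]

The harness then removes BaseBdev2 as well, and the DEBUG lines that follow show the raid bdev being cleaned up entirely.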
[2024-07-15 10:17:05.529778] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.856 [2024-07-15 10:17:05.529819] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x861600 name Existed_Raid, state offline 00:09:40.856 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:40.856 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:40.856 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.856 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1742419 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1742419 ']' 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1742419 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1742419 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1742419' 00:09:41.115 killing process with pid 1742419 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1742419 00:09:41.115 [2024-07-15 10:17:05.784348] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.115 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1742419 00:09:41.115 [2024-07-15 10:17:05.785143] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.372 10:17:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:41.372 00:09:41.372 real 0m8.069s 00:09:41.372 user 0m14.147s 00:09:41.372 sys 0m1.628s 00:09:41.372 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:41.372 10:17:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.372 ************************************ 00:09:41.372 END TEST raid_state_function_test 00:09:41.372 ************************************ 00:09:41.372 10:17:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:41.372 10:17:05 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:41.372 10:17:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:41.372 10:17:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.372 
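With both base bdevs gone, the first test tears down its bdev_svc instance (pid 1742419) and prints the usual real/user/sys timing summary before run_test moves on to the superblock variant. The shutdown follows the pattern visible in the trace: probe the pid, check the process name, kill and reap it. A simplified sketch of that pattern (the real killprocess helper lives in test/common/autotest_common.sh and is not reproduced here):

  pid=1742419
  kill -0 "$pid"                            # liveness probe; fails if the app is already gone
  name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
  if [[ $name != sudo ]]; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                           # reap the child so the next test starts clean
  fi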
10:17:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.372 ************************************ 00:09:41.372 START TEST raid_state_function_test_sb 00:09:41.372 ************************************ 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1743983 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1743983' 00:09:41.372 Process raid pid: 1743983 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1743983 /var/tmp/spdk-raid.sock 00:09:41.372 
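The superblock variant repeats the whole state-machine walk against a fresh bdev_svc instance with its own RPC socket. The launch line below is copied from the trace; the polling loop is only a stand-in for waitforlisten (the output that follows is waitforlisten's own argument checks plus the app probing the QAT crypto devices while it starts up):

  /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # Stand-in wait: poll the RPC socket until the app answers.
  until /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done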
10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1743983 ']' 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:41.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.372 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.372 [2024-07-15 10:17:06.093423] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:41.372 [2024-07-15 10:17:06.093468] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.372 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:41.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.372 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:41.373 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:41.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:41.373 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:41.630 [2024-07-15 10:17:06.186209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.631 [2024-07-15 10:17:06.261021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.631 [2024-07-15 10:17:06.317730] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.631 [2024-07-15 10:17:06.317757] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.197 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.197 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:42.197 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:42.455 [2024-07-15 10:17:07.028748] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.455 [2024-07-15 10:17:07.028778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.455 [2024-07-15 10:17:07.028785] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 
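The NOTICE and DEBUG lines above show why the array starts out in the configuring state: bdev_raid_create is issued with -s (superblock) before either malloc base bdev exists, so both members are recorded as missing. The trace then alternates bdev_malloc_create, bdev_raid_delete and bdev_raid_create to exercise both creation orders; a condensed sketch of the net effect, using the RPC commands from this run (the trailing .state probe appended to the jq filter is illustrative):

  rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # configuring
  rpc bdev_malloc_create 32 512 -b BaseBdev1   # first member appears and is claimed, still configuring
  rpc bdev_malloc_create 32 512 -b BaseBdev2   # second member appears, array goes online
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # online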
00:09:42.455 [2024-07-15 10:17:07.028792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.455 "name": "Existed_Raid", 00:09:42.455 "uuid": "6628ba79-866f-492e-ac58-4edbbaf4142e", 00:09:42.455 "strip_size_kb": 64, 00:09:42.455 "state": "configuring", 00:09:42.455 "raid_level": "raid0", 00:09:42.455 "superblock": true, 00:09:42.455 "num_base_bdevs": 2, 00:09:42.455 "num_base_bdevs_discovered": 0, 00:09:42.455 "num_base_bdevs_operational": 2, 00:09:42.455 "base_bdevs_list": [ 00:09:42.455 { 00:09:42.455 "name": "BaseBdev1", 00:09:42.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.455 "is_configured": false, 00:09:42.455 "data_offset": 0, 00:09:42.455 "data_size": 0 00:09:42.455 }, 00:09:42.455 { 00:09:42.455 "name": "BaseBdev2", 00:09:42.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.455 "is_configured": false, 00:09:42.455 "data_offset": 0, 00:09:42.455 "data_size": 0 00:09:42.455 } 00:09:42.455 ] 00:09:42.455 }' 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.455 10:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.021 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:43.279 [2024-07-15 10:17:07.866825] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.279 [2024-07-15 10:17:07.866845] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb10f20 name Existed_Raid, state configuring 00:09:43.279 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 
'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:43.279 [2024-07-15 10:17:08.047306] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.279 [2024-07-15 10:17:08.047325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.279 [2024-07-15 10:17:08.047331] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.279 [2024-07-15 10:17:08.047340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.279 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.537 [2024-07-15 10:17:08.215966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.537 BaseBdev1 00:09:43.537 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:43.537 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:43.537 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:43.537 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:43.537 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:43.537 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:43.537 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:43.796 [ 00:09:43.796 { 00:09:43.796 "name": "BaseBdev1", 00:09:43.796 "aliases": [ 00:09:43.796 "019d2938-a408-40fd-97a4-314657b0a936" 00:09:43.796 ], 00:09:43.796 "product_name": "Malloc disk", 00:09:43.796 "block_size": 512, 00:09:43.796 "num_blocks": 65536, 00:09:43.796 "uuid": "019d2938-a408-40fd-97a4-314657b0a936", 00:09:43.796 "assigned_rate_limits": { 00:09:43.796 "rw_ios_per_sec": 0, 00:09:43.796 "rw_mbytes_per_sec": 0, 00:09:43.796 "r_mbytes_per_sec": 0, 00:09:43.796 "w_mbytes_per_sec": 0 00:09:43.796 }, 00:09:43.796 "claimed": true, 00:09:43.796 "claim_type": "exclusive_write", 00:09:43.796 "zoned": false, 00:09:43.796 "supported_io_types": { 00:09:43.796 "read": true, 00:09:43.796 "write": true, 00:09:43.796 "unmap": true, 00:09:43.796 "flush": true, 00:09:43.796 "reset": true, 00:09:43.796 "nvme_admin": false, 00:09:43.796 "nvme_io": false, 00:09:43.796 "nvme_io_md": false, 00:09:43.796 "write_zeroes": true, 00:09:43.796 "zcopy": true, 00:09:43.796 "get_zone_info": false, 00:09:43.796 "zone_management": false, 00:09:43.796 "zone_append": false, 00:09:43.796 "compare": false, 00:09:43.796 "compare_and_write": false, 00:09:43.796 "abort": true, 00:09:43.796 "seek_hole": false, 00:09:43.796 "seek_data": false, 00:09:43.796 "copy": true, 00:09:43.796 "nvme_iov_md": false 00:09:43.796 }, 00:09:43.796 "memory_domains": [ 00:09:43.796 { 00:09:43.796 "dma_device_id": "system", 00:09:43.796 "dma_device_type": 1 00:09:43.796 }, 00:09:43.796 { 00:09:43.796 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:43.796 "dma_device_type": 2 00:09:43.796 } 00:09:43.796 ], 00:09:43.796 "driver_specific": {} 00:09:43.796 } 00:09:43.796 ] 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.796 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.054 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.054 "name": "Existed_Raid", 00:09:44.054 "uuid": "de7e104f-6159-44b3-ab0f-f0eea2350d87", 00:09:44.054 "strip_size_kb": 64, 00:09:44.054 "state": "configuring", 00:09:44.054 "raid_level": "raid0", 00:09:44.054 "superblock": true, 00:09:44.054 "num_base_bdevs": 2, 00:09:44.054 "num_base_bdevs_discovered": 1, 00:09:44.054 "num_base_bdevs_operational": 2, 00:09:44.054 "base_bdevs_list": [ 00:09:44.054 { 00:09:44.054 "name": "BaseBdev1", 00:09:44.054 "uuid": "019d2938-a408-40fd-97a4-314657b0a936", 00:09:44.054 "is_configured": true, 00:09:44.054 "data_offset": 2048, 00:09:44.054 "data_size": 63488 00:09:44.054 }, 00:09:44.054 { 00:09:44.054 "name": "BaseBdev2", 00:09:44.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.054 "is_configured": false, 00:09:44.054 "data_offset": 0, 00:09:44.054 "data_size": 0 00:09:44.054 } 00:09:44.054 ] 00:09:44.054 }' 00:09:44.054 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.054 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.620 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:44.620 [2024-07-15 10:17:09.334845] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.620 [2024-07-15 10:17:09.334875] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb10810 name Existed_Raid, state configuring 00:09:44.620 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:44.877 [2024-07-15 10:17:09.515351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.877 [2024-07-15 10:17:09.516429] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.877 [2024-07-15 10:17:09.516454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.877 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.133 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:45.133 "name": "Existed_Raid", 00:09:45.133 "uuid": "49326d6b-8490-46e4-886f-3c4e41c2bc63", 00:09:45.133 "strip_size_kb": 64, 00:09:45.133 "state": "configuring", 00:09:45.133 "raid_level": "raid0", 00:09:45.133 "superblock": true, 00:09:45.133 "num_base_bdevs": 2, 00:09:45.133 "num_base_bdevs_discovered": 1, 00:09:45.133 "num_base_bdevs_operational": 2, 00:09:45.133 "base_bdevs_list": [ 00:09:45.133 { 00:09:45.133 "name": "BaseBdev1", 00:09:45.133 "uuid": "019d2938-a408-40fd-97a4-314657b0a936", 00:09:45.133 "is_configured": true, 00:09:45.133 "data_offset": 2048, 00:09:45.133 "data_size": 63488 00:09:45.133 }, 00:09:45.133 { 00:09:45.133 "name": "BaseBdev2", 00:09:45.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.133 "is_configured": false, 00:09:45.133 "data_offset": 0, 00:09:45.133 "data_size": 0 00:09:45.133 } 00:09:45.133 ] 00:09:45.133 }' 00:09:45.133 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:45.133 10:17:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.696 10:17:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.696 [2024-07-15 10:17:10.372195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.696 [2024-07-15 10:17:10.372302] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb11600 00:09:45.696 [2024-07-15 10:17:10.372311] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:45.696 [2024-07-15 10:17:10.372428] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb12840 00:09:45.696 [2024-07-15 10:17:10.372505] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb11600 00:09:45.696 [2024-07-15 10:17:10.372511] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb11600 00:09:45.696 [2024-07-15 10:17:10.372573] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.696 BaseBdev2 00:09:45.696 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:45.696 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:45.696 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:45.696 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:45.696 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:45.696 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:45.696 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:45.953 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.953 [ 00:09:45.953 { 00:09:45.953 "name": "BaseBdev2", 00:09:45.953 "aliases": [ 00:09:45.953 "d5e44ba8-6e24-4eeb-b2dc-3486e161f264" 00:09:45.953 ], 00:09:45.953 "product_name": "Malloc disk", 00:09:45.953 "block_size": 512, 00:09:45.953 "num_blocks": 65536, 00:09:45.953 "uuid": "d5e44ba8-6e24-4eeb-b2dc-3486e161f264", 00:09:45.953 "assigned_rate_limits": { 00:09:45.953 "rw_ios_per_sec": 0, 00:09:45.953 "rw_mbytes_per_sec": 0, 00:09:45.953 "r_mbytes_per_sec": 0, 00:09:45.953 "w_mbytes_per_sec": 0 00:09:45.953 }, 00:09:45.953 "claimed": true, 00:09:45.953 "claim_type": "exclusive_write", 00:09:45.953 "zoned": false, 00:09:45.953 "supported_io_types": { 00:09:45.953 "read": true, 00:09:45.953 "write": true, 00:09:45.954 "unmap": true, 00:09:45.954 "flush": true, 00:09:45.954 "reset": true, 00:09:45.954 "nvme_admin": false, 00:09:45.954 "nvme_io": false, 00:09:45.954 "nvme_io_md": false, 00:09:45.954 "write_zeroes": true, 00:09:45.954 "zcopy": true, 00:09:45.954 "get_zone_info": false, 00:09:45.954 "zone_management": false, 00:09:45.954 "zone_append": false, 00:09:45.954 "compare": false, 00:09:45.954 "compare_and_write": false, 00:09:45.954 "abort": true, 00:09:45.954 "seek_hole": false, 00:09:45.954 "seek_data": false, 00:09:45.954 "copy": true, 00:09:45.954 "nvme_iov_md": false 00:09:45.954 }, 00:09:45.954 "memory_domains": [ 00:09:45.954 { 
00:09:45.954 "dma_device_id": "system", 00:09:45.954 "dma_device_type": 1 00:09:45.954 }, 00:09:45.954 { 00:09:45.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.954 "dma_device_type": 2 00:09:45.954 } 00:09:45.954 ], 00:09:45.954 "driver_specific": {} 00:09:45.954 } 00:09:45.954 ] 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.954 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.210 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:46.210 "name": "Existed_Raid", 00:09:46.210 "uuid": "49326d6b-8490-46e4-886f-3c4e41c2bc63", 00:09:46.210 "strip_size_kb": 64, 00:09:46.210 "state": "online", 00:09:46.210 "raid_level": "raid0", 00:09:46.210 "superblock": true, 00:09:46.210 "num_base_bdevs": 2, 00:09:46.210 "num_base_bdevs_discovered": 2, 00:09:46.210 "num_base_bdevs_operational": 2, 00:09:46.210 "base_bdevs_list": [ 00:09:46.210 { 00:09:46.210 "name": "BaseBdev1", 00:09:46.210 "uuid": "019d2938-a408-40fd-97a4-314657b0a936", 00:09:46.210 "is_configured": true, 00:09:46.210 "data_offset": 2048, 00:09:46.210 "data_size": 63488 00:09:46.210 }, 00:09:46.210 { 00:09:46.210 "name": "BaseBdev2", 00:09:46.210 "uuid": "d5e44ba8-6e24-4eeb-b2dc-3486e161f264", 00:09:46.210 "is_configured": true, 00:09:46.210 "data_offset": 2048, 00:09:46.210 "data_size": 63488 00:09:46.210 } 00:09:46.210 ] 00:09:46.210 }' 00:09:46.210 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:46.210 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:46.813 [2024-07-15 10:17:11.555476] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:46.813 "name": "Existed_Raid", 00:09:46.813 "aliases": [ 00:09:46.813 "49326d6b-8490-46e4-886f-3c4e41c2bc63" 00:09:46.813 ], 00:09:46.813 "product_name": "Raid Volume", 00:09:46.813 "block_size": 512, 00:09:46.813 "num_blocks": 126976, 00:09:46.813 "uuid": "49326d6b-8490-46e4-886f-3c4e41c2bc63", 00:09:46.813 "assigned_rate_limits": { 00:09:46.813 "rw_ios_per_sec": 0, 00:09:46.813 "rw_mbytes_per_sec": 0, 00:09:46.813 "r_mbytes_per_sec": 0, 00:09:46.813 "w_mbytes_per_sec": 0 00:09:46.813 }, 00:09:46.813 "claimed": false, 00:09:46.813 "zoned": false, 00:09:46.813 "supported_io_types": { 00:09:46.813 "read": true, 00:09:46.813 "write": true, 00:09:46.813 "unmap": true, 00:09:46.813 "flush": true, 00:09:46.813 "reset": true, 00:09:46.813 "nvme_admin": false, 00:09:46.813 "nvme_io": false, 00:09:46.813 "nvme_io_md": false, 00:09:46.813 "write_zeroes": true, 00:09:46.813 "zcopy": false, 00:09:46.813 "get_zone_info": false, 00:09:46.813 "zone_management": false, 00:09:46.813 "zone_append": false, 00:09:46.813 "compare": false, 00:09:46.813 "compare_and_write": false, 00:09:46.813 "abort": false, 00:09:46.813 "seek_hole": false, 00:09:46.813 "seek_data": false, 00:09:46.813 "copy": false, 00:09:46.813 "nvme_iov_md": false 00:09:46.813 }, 00:09:46.813 "memory_domains": [ 00:09:46.813 { 00:09:46.813 "dma_device_id": "system", 00:09:46.813 "dma_device_type": 1 00:09:46.813 }, 00:09:46.813 { 00:09:46.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.813 "dma_device_type": 2 00:09:46.813 }, 00:09:46.813 { 00:09:46.813 "dma_device_id": "system", 00:09:46.813 "dma_device_type": 1 00:09:46.813 }, 00:09:46.813 { 00:09:46.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.813 "dma_device_type": 2 00:09:46.813 } 00:09:46.813 ], 00:09:46.813 "driver_specific": { 00:09:46.813 "raid": { 00:09:46.813 "uuid": "49326d6b-8490-46e4-886f-3c4e41c2bc63", 00:09:46.813 "strip_size_kb": 64, 00:09:46.813 "state": "online", 00:09:46.813 "raid_level": "raid0", 00:09:46.813 "superblock": true, 00:09:46.813 "num_base_bdevs": 2, 00:09:46.813 "num_base_bdevs_discovered": 2, 00:09:46.813 "num_base_bdevs_operational": 2, 00:09:46.813 "base_bdevs_list": [ 00:09:46.813 { 00:09:46.813 "name": "BaseBdev1", 00:09:46.813 "uuid": "019d2938-a408-40fd-97a4-314657b0a936", 00:09:46.813 "is_configured": true, 00:09:46.813 "data_offset": 2048, 00:09:46.813 "data_size": 63488 00:09:46.813 }, 00:09:46.813 { 00:09:46.813 "name": "BaseBdev2", 00:09:46.813 "uuid": "d5e44ba8-6e24-4eeb-b2dc-3486e161f264", 00:09:46.813 "is_configured": true, 00:09:46.813 
"data_offset": 2048, 00:09:46.813 "data_size": 63488 00:09:46.813 } 00:09:46.813 ] 00:09:46.813 } 00:09:46.813 } 00:09:46.813 }' 00:09:46.813 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.072 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:47.072 BaseBdev2' 00:09:47.072 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:47.072 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:47.072 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:47.072 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:47.072 "name": "BaseBdev1", 00:09:47.072 "aliases": [ 00:09:47.072 "019d2938-a408-40fd-97a4-314657b0a936" 00:09:47.072 ], 00:09:47.072 "product_name": "Malloc disk", 00:09:47.072 "block_size": 512, 00:09:47.072 "num_blocks": 65536, 00:09:47.072 "uuid": "019d2938-a408-40fd-97a4-314657b0a936", 00:09:47.072 "assigned_rate_limits": { 00:09:47.072 "rw_ios_per_sec": 0, 00:09:47.072 "rw_mbytes_per_sec": 0, 00:09:47.072 "r_mbytes_per_sec": 0, 00:09:47.072 "w_mbytes_per_sec": 0 00:09:47.072 }, 00:09:47.072 "claimed": true, 00:09:47.072 "claim_type": "exclusive_write", 00:09:47.072 "zoned": false, 00:09:47.072 "supported_io_types": { 00:09:47.072 "read": true, 00:09:47.072 "write": true, 00:09:47.072 "unmap": true, 00:09:47.072 "flush": true, 00:09:47.072 "reset": true, 00:09:47.072 "nvme_admin": false, 00:09:47.072 "nvme_io": false, 00:09:47.072 "nvme_io_md": false, 00:09:47.072 "write_zeroes": true, 00:09:47.072 "zcopy": true, 00:09:47.072 "get_zone_info": false, 00:09:47.072 "zone_management": false, 00:09:47.072 "zone_append": false, 00:09:47.072 "compare": false, 00:09:47.072 "compare_and_write": false, 00:09:47.072 "abort": true, 00:09:47.072 "seek_hole": false, 00:09:47.072 "seek_data": false, 00:09:47.072 "copy": true, 00:09:47.072 "nvme_iov_md": false 00:09:47.072 }, 00:09:47.072 "memory_domains": [ 00:09:47.072 { 00:09:47.072 "dma_device_id": "system", 00:09:47.072 "dma_device_type": 1 00:09:47.072 }, 00:09:47.072 { 00:09:47.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.072 "dma_device_type": 2 00:09:47.072 } 00:09:47.072 ], 00:09:47.072 "driver_specific": {} 00:09:47.072 }' 00:09:47.072 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.072 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.330 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:47.330 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.330 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.330 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:47.330 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.330 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.330 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:09:47.330 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.330 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.330 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:47.330 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:47.330 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:47.330 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:47.589 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:47.589 "name": "BaseBdev2", 00:09:47.589 "aliases": [ 00:09:47.589 "d5e44ba8-6e24-4eeb-b2dc-3486e161f264" 00:09:47.589 ], 00:09:47.589 "product_name": "Malloc disk", 00:09:47.589 "block_size": 512, 00:09:47.589 "num_blocks": 65536, 00:09:47.589 "uuid": "d5e44ba8-6e24-4eeb-b2dc-3486e161f264", 00:09:47.589 "assigned_rate_limits": { 00:09:47.589 "rw_ios_per_sec": 0, 00:09:47.589 "rw_mbytes_per_sec": 0, 00:09:47.589 "r_mbytes_per_sec": 0, 00:09:47.589 "w_mbytes_per_sec": 0 00:09:47.589 }, 00:09:47.589 "claimed": true, 00:09:47.589 "claim_type": "exclusive_write", 00:09:47.589 "zoned": false, 00:09:47.589 "supported_io_types": { 00:09:47.589 "read": true, 00:09:47.589 "write": true, 00:09:47.589 "unmap": true, 00:09:47.589 "flush": true, 00:09:47.589 "reset": true, 00:09:47.589 "nvme_admin": false, 00:09:47.589 "nvme_io": false, 00:09:47.589 "nvme_io_md": false, 00:09:47.589 "write_zeroes": true, 00:09:47.589 "zcopy": true, 00:09:47.589 "get_zone_info": false, 00:09:47.589 "zone_management": false, 00:09:47.589 "zone_append": false, 00:09:47.589 "compare": false, 00:09:47.589 "compare_and_write": false, 00:09:47.589 "abort": true, 00:09:47.589 "seek_hole": false, 00:09:47.589 "seek_data": false, 00:09:47.589 "copy": true, 00:09:47.589 "nvme_iov_md": false 00:09:47.589 }, 00:09:47.589 "memory_domains": [ 00:09:47.589 { 00:09:47.589 "dma_device_id": "system", 00:09:47.589 "dma_device_type": 1 00:09:47.589 }, 00:09:47.589 { 00:09:47.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.589 "dma_device_type": 2 00:09:47.589 } 00:09:47.589 ], 00:09:47.589 "driver_specific": {} 00:09:47.589 }' 00:09:47.589 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.589 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:47.589 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:47.589 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.847 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:47.847 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:47.847 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.847 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:47.847 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:47.847 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.847 10:17:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:47.847 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:47.847 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:48.105 [2024-07-15 10:17:12.726384] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.105 [2024-07-15 10:17:12.726404] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.105 [2024-07-15 10:17:12.726434] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.105 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:48.105 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:48.105 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:48.105 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:48.105 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:48.105 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:48.105 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:48.105 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.106 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.364 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:48.364 "name": "Existed_Raid", 00:09:48.364 "uuid": "49326d6b-8490-46e4-886f-3c4e41c2bc63", 00:09:48.364 "strip_size_kb": 64, 00:09:48.364 "state": "offline", 00:09:48.364 "raid_level": "raid0", 00:09:48.364 "superblock": true, 00:09:48.364 "num_base_bdevs": 2, 00:09:48.364 "num_base_bdevs_discovered": 1, 00:09:48.364 "num_base_bdevs_operational": 1, 00:09:48.364 "base_bdevs_list": [ 00:09:48.364 { 00:09:48.364 "name": null, 00:09:48.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.364 "is_configured": false, 00:09:48.364 "data_offset": 2048, 00:09:48.364 "data_size": 63488 00:09:48.364 }, 00:09:48.364 { 
00:09:48.364 "name": "BaseBdev2", 00:09:48.364 "uuid": "d5e44ba8-6e24-4eeb-b2dc-3486e161f264", 00:09:48.364 "is_configured": true, 00:09:48.364 "data_offset": 2048, 00:09:48.364 "data_size": 63488 00:09:48.364 } 00:09:48.364 ] 00:09:48.364 }' 00:09:48.364 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:48.364 10:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.622 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:48.622 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:48.622 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.622 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:48.879 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:48.879 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.879 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:49.136 [2024-07-15 10:17:13.701705] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.136 [2024-07-15 10:17:13.701739] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb11600 name Existed_Raid, state offline 00:09:49.136 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:49.136 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:49.136 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.136 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:49.136 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:49.136 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:49.136 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:09:49.136 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1743983 00:09:49.137 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1743983 ']' 00:09:49.137 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1743983 00:09:49.137 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:09:49.137 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.137 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1743983 00:09:49.394 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.394 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.394 10:17:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1743983' 00:09:49.394 killing process with pid 1743983 00:09:49.394 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1743983 00:09:49.394 [2024-07-15 10:17:13.955686] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.394 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1743983 00:09:49.394 [2024-07-15 10:17:13.956473] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.394 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:49.394 00:09:49.394 real 0m8.093s 00:09:49.394 user 0m14.297s 00:09:49.394 sys 0m1.570s 00:09:49.394 10:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.394 10:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.394 ************************************ 00:09:49.394 END TEST raid_state_function_test_sb 00:09:49.394 ************************************ 00:09:49.394 10:17:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:49.394 10:17:14 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:49.394 10:17:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:49.394 10:17:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.394 10:17:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.652 ************************************ 00:09:49.652 START TEST raid_superblock_test 00:09:49.652 ************************************ 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:49.652 10:17:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1745760 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1745760 /var/tmp/spdk-raid.sock 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1745760 ']' 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:49.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.652 10:17:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.652 [2024-07-15 10:17:14.262826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:49.652 [2024-07-15 10:17:14.262870] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745760 ] 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:49.652 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.652 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:49.652 [2024-07-15 10:17:14.354726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.652 [2024-07-15 10:17:14.428387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.910 [2024-07-15 10:17:14.482093] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.910 [2024-07-15 10:17:14.482121] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:50.475 malloc1 00:09:50.475 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:50.733 [2024-07-15 10:17:15.382128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:50.733 [2024-07-15 10:17:15.382166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.733 [2024-07-15 10:17:15.382185] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9322f0 00:09:50.733 [2024-07-15 10:17:15.382193] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.733 [2024-07-15 10:17:15.383339] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.733 [2024-07-15 10:17:15.383361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:50.733 pt1 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:50.733 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:50.992 malloc2 00:09:50.992 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.992 [2024-07-15 10:17:15.722682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.992 [2024-07-15 10:17:15.722716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.992 [2024-07-15 10:17:15.722728] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9336d0 00:09:50.992 [2024-07-15 10:17:15.722736] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.992 [2024-07-15 10:17:15.723753] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.992 [2024-07-15 10:17:15.723774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.992 pt2 00:09:50.992 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:50.992 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:50.992 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:09:51.250 [2024-07-15 10:17:15.879119] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.250 [2024-07-15 10:17:15.879993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:51.250 [2024-07-15 10:17:15.880091] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xacc310 00:09:51.250 [2024-07-15 10:17:15.880099] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:51.250 [2024-07-15 10:17:15.880227] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xacbce0 00:09:51.250 [2024-07-15 10:17:15.880318] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xacc310 00:09:51.250 [2024-07-15 10:17:15.880324] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xacc310 00:09:51.250 [2024-07-15 10:17:15.880387] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.250 10:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.509 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.509 "name": "raid_bdev1", 00:09:51.509 "uuid": "0f481865-905b-4e6f-9bd3-a66c5ad04826", 00:09:51.509 "strip_size_kb": 64, 00:09:51.509 "state": 
"online", 00:09:51.509 "raid_level": "raid0", 00:09:51.509 "superblock": true, 00:09:51.509 "num_base_bdevs": 2, 00:09:51.509 "num_base_bdevs_discovered": 2, 00:09:51.509 "num_base_bdevs_operational": 2, 00:09:51.509 "base_bdevs_list": [ 00:09:51.509 { 00:09:51.509 "name": "pt1", 00:09:51.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.509 "is_configured": true, 00:09:51.509 "data_offset": 2048, 00:09:51.509 "data_size": 63488 00:09:51.509 }, 00:09:51.509 { 00:09:51.509 "name": "pt2", 00:09:51.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.509 "is_configured": true, 00:09:51.509 "data_offset": 2048, 00:09:51.509 "data_size": 63488 00:09:51.509 } 00:09:51.509 ] 00:09:51.509 }' 00:09:51.509 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.509 10:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.767 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:51.767 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:51.767 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:51.767 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:51.767 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:51.767 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:51.767 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:51.767 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:52.024 [2024-07-15 10:17:16.685318] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.025 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:52.025 "name": "raid_bdev1", 00:09:52.025 "aliases": [ 00:09:52.025 "0f481865-905b-4e6f-9bd3-a66c5ad04826" 00:09:52.025 ], 00:09:52.025 "product_name": "Raid Volume", 00:09:52.025 "block_size": 512, 00:09:52.025 "num_blocks": 126976, 00:09:52.025 "uuid": "0f481865-905b-4e6f-9bd3-a66c5ad04826", 00:09:52.025 "assigned_rate_limits": { 00:09:52.025 "rw_ios_per_sec": 0, 00:09:52.025 "rw_mbytes_per_sec": 0, 00:09:52.025 "r_mbytes_per_sec": 0, 00:09:52.025 "w_mbytes_per_sec": 0 00:09:52.025 }, 00:09:52.025 "claimed": false, 00:09:52.025 "zoned": false, 00:09:52.025 "supported_io_types": { 00:09:52.025 "read": true, 00:09:52.025 "write": true, 00:09:52.025 "unmap": true, 00:09:52.025 "flush": true, 00:09:52.025 "reset": true, 00:09:52.025 "nvme_admin": false, 00:09:52.025 "nvme_io": false, 00:09:52.025 "nvme_io_md": false, 00:09:52.025 "write_zeroes": true, 00:09:52.025 "zcopy": false, 00:09:52.025 "get_zone_info": false, 00:09:52.025 "zone_management": false, 00:09:52.025 "zone_append": false, 00:09:52.025 "compare": false, 00:09:52.025 "compare_and_write": false, 00:09:52.025 "abort": false, 00:09:52.025 "seek_hole": false, 00:09:52.025 "seek_data": false, 00:09:52.025 "copy": false, 00:09:52.025 "nvme_iov_md": false 00:09:52.025 }, 00:09:52.025 "memory_domains": [ 00:09:52.025 { 00:09:52.025 "dma_device_id": "system", 00:09:52.025 "dma_device_type": 1 00:09:52.025 }, 00:09:52.025 { 00:09:52.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.025 
"dma_device_type": 2 00:09:52.025 }, 00:09:52.025 { 00:09:52.025 "dma_device_id": "system", 00:09:52.025 "dma_device_type": 1 00:09:52.025 }, 00:09:52.025 { 00:09:52.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.025 "dma_device_type": 2 00:09:52.025 } 00:09:52.025 ], 00:09:52.025 "driver_specific": { 00:09:52.025 "raid": { 00:09:52.025 "uuid": "0f481865-905b-4e6f-9bd3-a66c5ad04826", 00:09:52.025 "strip_size_kb": 64, 00:09:52.025 "state": "online", 00:09:52.025 "raid_level": "raid0", 00:09:52.025 "superblock": true, 00:09:52.025 "num_base_bdevs": 2, 00:09:52.025 "num_base_bdevs_discovered": 2, 00:09:52.025 "num_base_bdevs_operational": 2, 00:09:52.025 "base_bdevs_list": [ 00:09:52.025 { 00:09:52.025 "name": "pt1", 00:09:52.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.025 "is_configured": true, 00:09:52.025 "data_offset": 2048, 00:09:52.025 "data_size": 63488 00:09:52.025 }, 00:09:52.025 { 00:09:52.025 "name": "pt2", 00:09:52.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.025 "is_configured": true, 00:09:52.025 "data_offset": 2048, 00:09:52.025 "data_size": 63488 00:09:52.025 } 00:09:52.025 ] 00:09:52.025 } 00:09:52.025 } 00:09:52.025 }' 00:09:52.025 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.025 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:52.025 pt2' 00:09:52.025 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:52.025 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:52.025 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:52.282 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:52.282 "name": "pt1", 00:09:52.282 "aliases": [ 00:09:52.282 "00000000-0000-0000-0000-000000000001" 00:09:52.282 ], 00:09:52.282 "product_name": "passthru", 00:09:52.282 "block_size": 512, 00:09:52.282 "num_blocks": 65536, 00:09:52.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.282 "assigned_rate_limits": { 00:09:52.282 "rw_ios_per_sec": 0, 00:09:52.282 "rw_mbytes_per_sec": 0, 00:09:52.282 "r_mbytes_per_sec": 0, 00:09:52.282 "w_mbytes_per_sec": 0 00:09:52.282 }, 00:09:52.282 "claimed": true, 00:09:52.282 "claim_type": "exclusive_write", 00:09:52.282 "zoned": false, 00:09:52.282 "supported_io_types": { 00:09:52.282 "read": true, 00:09:52.282 "write": true, 00:09:52.282 "unmap": true, 00:09:52.282 "flush": true, 00:09:52.282 "reset": true, 00:09:52.283 "nvme_admin": false, 00:09:52.283 "nvme_io": false, 00:09:52.283 "nvme_io_md": false, 00:09:52.283 "write_zeroes": true, 00:09:52.283 "zcopy": true, 00:09:52.283 "get_zone_info": false, 00:09:52.283 "zone_management": false, 00:09:52.283 "zone_append": false, 00:09:52.283 "compare": false, 00:09:52.283 "compare_and_write": false, 00:09:52.283 "abort": true, 00:09:52.283 "seek_hole": false, 00:09:52.283 "seek_data": false, 00:09:52.283 "copy": true, 00:09:52.283 "nvme_iov_md": false 00:09:52.283 }, 00:09:52.283 "memory_domains": [ 00:09:52.283 { 00:09:52.283 "dma_device_id": "system", 00:09:52.283 "dma_device_type": 1 00:09:52.283 }, 00:09:52.283 { 00:09:52.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.283 "dma_device_type": 2 00:09:52.283 } 00:09:52.283 ], 00:09:52.283 
"driver_specific": { 00:09:52.283 "passthru": { 00:09:52.283 "name": "pt1", 00:09:52.283 "base_bdev_name": "malloc1" 00:09:52.283 } 00:09:52.283 } 00:09:52.283 }' 00:09:52.283 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.283 10:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.283 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:52.283 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.283 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:52.540 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:52.798 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:52.798 "name": "pt2", 00:09:52.798 "aliases": [ 00:09:52.798 "00000000-0000-0000-0000-000000000002" 00:09:52.798 ], 00:09:52.798 "product_name": "passthru", 00:09:52.798 "block_size": 512, 00:09:52.798 "num_blocks": 65536, 00:09:52.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.798 "assigned_rate_limits": { 00:09:52.798 "rw_ios_per_sec": 0, 00:09:52.798 "rw_mbytes_per_sec": 0, 00:09:52.798 "r_mbytes_per_sec": 0, 00:09:52.798 "w_mbytes_per_sec": 0 00:09:52.798 }, 00:09:52.798 "claimed": true, 00:09:52.798 "claim_type": "exclusive_write", 00:09:52.798 "zoned": false, 00:09:52.798 "supported_io_types": { 00:09:52.798 "read": true, 00:09:52.798 "write": true, 00:09:52.798 "unmap": true, 00:09:52.798 "flush": true, 00:09:52.798 "reset": true, 00:09:52.798 "nvme_admin": false, 00:09:52.798 "nvme_io": false, 00:09:52.798 "nvme_io_md": false, 00:09:52.798 "write_zeroes": true, 00:09:52.798 "zcopy": true, 00:09:52.798 "get_zone_info": false, 00:09:52.799 "zone_management": false, 00:09:52.799 "zone_append": false, 00:09:52.799 "compare": false, 00:09:52.799 "compare_and_write": false, 00:09:52.799 "abort": true, 00:09:52.799 "seek_hole": false, 00:09:52.799 "seek_data": false, 00:09:52.799 "copy": true, 00:09:52.799 "nvme_iov_md": false 00:09:52.799 }, 00:09:52.799 "memory_domains": [ 00:09:52.799 { 00:09:52.799 "dma_device_id": "system", 00:09:52.799 "dma_device_type": 1 00:09:52.799 }, 00:09:52.799 { 00:09:52.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.799 "dma_device_type": 2 00:09:52.799 } 00:09:52.799 ], 00:09:52.799 "driver_specific": { 00:09:52.799 "passthru": { 00:09:52.799 "name": "pt2", 00:09:52.799 "base_bdev_name": "malloc2" 00:09:52.799 } 
00:09:52.799 } 00:09:52.799 }' 00:09:52.799 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.799 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:52.799 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:52.799 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.799 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:52.799 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:52.799 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:52.799 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:53.056 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:53.056 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:53.056 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:53.056 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:53.056 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:53.056 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:53.314 [2024-07-15 10:17:17.852322] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.314 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0f481865-905b-4e6f-9bd3-a66c5ad04826 00:09:53.314 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0f481865-905b-4e6f-9bd3-a66c5ad04826 ']' 00:09:53.314 10:17:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:53.314 [2024-07-15 10:17:18.024617] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.314 [2024-07-15 10:17:18.024632] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.314 [2024-07-15 10:17:18.024668] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.314 [2024-07-15 10:17:18.024696] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.314 [2024-07-15 10:17:18.024703] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xacc310 name raid_bdev1, state offline 00:09:53.314 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.314 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:53.571 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:53.571 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:53.571 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.571 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:09:53.829 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.829 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:53.829 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:53.829 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:09:54.086 [2024-07-15 10:17:18.846727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:54.086 [2024-07-15 10:17:18.847646] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:54.086 [2024-07-15 10:17:18.847686] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:54.086 [2024-07-15 10:17:18.847715] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:54.086 [2024-07-15 10:17:18.847742] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.086 [2024-07-15 10:17:18.847749] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xad53f0 name raid_bdev1, state configuring 00:09:54.086 request: 00:09:54.086 { 00:09:54.086 "name": "raid_bdev1", 00:09:54.086 "raid_level": "raid0", 00:09:54.086 "base_bdevs": [ 00:09:54.086 "malloc1", 00:09:54.086 "malloc2" 00:09:54.086 ], 00:09:54.086 "strip_size_kb": 64, 00:09:54.086 "superblock": false, 00:09:54.086 "method": "bdev_raid_create", 00:09:54.086 "req_id": 1 00:09:54.086 } 00:09:54.086 Got JSON-RPC error response 00:09:54.086 response: 00:09:54.086 { 00:09:54.086 "code": -17, 00:09:54.086 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:54.086 } 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:54.086 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:54.087 10:17:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:54.087 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.087 10:17:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:54.345 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:54.345 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:54.345 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:54.603 [2024-07-15 10:17:19.175550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.603 [2024-07-15 10:17:19.175584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.603 [2024-07-15 10:17:19.175596] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xad5d70 00:09:54.603 [2024-07-15 10:17:19.175620] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.603 [2024-07-15 10:17:19.176762] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.603 [2024-07-15 10:17:19.176785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.603 [2024-07-15 10:17:19.176832] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:54.603 [2024-07-15 10:17:19.176850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.603 pt1 00:09:54.603 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:54.603 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:54.603 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:54.603 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:54.603 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:54.603 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:54.604 "name": "raid_bdev1", 00:09:54.604 "uuid": "0f481865-905b-4e6f-9bd3-a66c5ad04826", 00:09:54.604 "strip_size_kb": 64, 00:09:54.604 "state": "configuring", 00:09:54.604 "raid_level": "raid0", 00:09:54.604 "superblock": true, 00:09:54.604 "num_base_bdevs": 2, 00:09:54.604 "num_base_bdevs_discovered": 1, 00:09:54.604 "num_base_bdevs_operational": 2, 00:09:54.604 "base_bdevs_list": [ 00:09:54.604 { 00:09:54.604 "name": "pt1", 00:09:54.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.604 "is_configured": true, 00:09:54.604 "data_offset": 2048, 00:09:54.604 "data_size": 63488 00:09:54.604 }, 00:09:54.604 { 00:09:54.604 "name": null, 00:09:54.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.604 "is_configured": false, 00:09:54.604 "data_offset": 2048, 00:09:54.604 "data_size": 63488 00:09:54.604 } 00:09:54.604 ] 00:09:54.604 }' 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:54.604 10:17:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.170 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:09:55.170 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:09:55.170 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:55.170 10:17:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:55.428 [2024-07-15 10:17:19.993649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:55.428 [2024-07-15 10:17:19.993687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.428 [2024-07-15 10:17:19.993700] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaccbb0 00:09:55.428 [2024-07-15 10:17:19.993709] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.428 [2024-07-15 10:17:19.993980] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.428 [2024-07-15 10:17:19.993994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:55.428 [2024-07-15 10:17:19.994040] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:55.428 [2024-07-15 10:17:19.994054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:55.428 [2024-07-15 10:17:19.994118] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xacb120 00:09:55.428 [2024-07-15 10:17:19.994125] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 126976, blocklen 512 00:09:55.428 [2024-07-15 10:17:19.994235] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x933960 00:09:55.428 [2024-07-15 10:17:19.994313] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xacb120 00:09:55.428 [2024-07-15 10:17:19.994320] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xacb120 00:09:55.428 [2024-07-15 10:17:19.994381] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.428 pt2 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.428 "name": "raid_bdev1", 00:09:55.428 "uuid": "0f481865-905b-4e6f-9bd3-a66c5ad04826", 00:09:55.428 "strip_size_kb": 64, 00:09:55.428 "state": "online", 00:09:55.428 "raid_level": "raid0", 00:09:55.428 "superblock": true, 00:09:55.428 "num_base_bdevs": 2, 00:09:55.428 "num_base_bdevs_discovered": 2, 00:09:55.428 "num_base_bdevs_operational": 2, 00:09:55.428 "base_bdevs_list": [ 00:09:55.428 { 00:09:55.428 "name": "pt1", 00:09:55.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.428 "is_configured": true, 00:09:55.428 "data_offset": 2048, 00:09:55.428 "data_size": 63488 00:09:55.428 }, 00:09:55.428 { 00:09:55.428 "name": "pt2", 00:09:55.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.428 "is_configured": true, 00:09:55.428 "data_offset": 2048, 00:09:55.428 "data_size": 63488 00:09:55.428 } 00:09:55.428 ] 00:09:55.428 }' 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.428 10:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.993 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.993 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:09:55.993 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:55.993 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:55.993 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:55.993 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:55.993 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:55.993 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:56.253 [2024-07-15 10:17:20.852025] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.253 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:56.253 "name": "raid_bdev1", 00:09:56.253 "aliases": [ 00:09:56.253 "0f481865-905b-4e6f-9bd3-a66c5ad04826" 00:09:56.253 ], 00:09:56.253 "product_name": "Raid Volume", 00:09:56.253 "block_size": 512, 00:09:56.253 "num_blocks": 126976, 00:09:56.253 "uuid": "0f481865-905b-4e6f-9bd3-a66c5ad04826", 00:09:56.253 "assigned_rate_limits": { 00:09:56.253 "rw_ios_per_sec": 0, 00:09:56.253 "rw_mbytes_per_sec": 0, 00:09:56.253 "r_mbytes_per_sec": 0, 00:09:56.253 "w_mbytes_per_sec": 0 00:09:56.253 }, 00:09:56.253 "claimed": false, 00:09:56.253 "zoned": false, 00:09:56.253 "supported_io_types": { 00:09:56.253 "read": true, 00:09:56.253 "write": true, 00:09:56.253 "unmap": true, 00:09:56.253 "flush": true, 00:09:56.253 "reset": true, 00:09:56.253 "nvme_admin": false, 00:09:56.253 "nvme_io": false, 00:09:56.253 "nvme_io_md": false, 00:09:56.253 "write_zeroes": true, 00:09:56.253 "zcopy": false, 00:09:56.253 "get_zone_info": false, 00:09:56.253 "zone_management": false, 00:09:56.253 "zone_append": false, 00:09:56.253 "compare": false, 00:09:56.253 "compare_and_write": false, 00:09:56.253 "abort": false, 00:09:56.253 "seek_hole": false, 00:09:56.253 "seek_data": false, 00:09:56.253 "copy": false, 00:09:56.253 "nvme_iov_md": false 00:09:56.253 }, 00:09:56.253 "memory_domains": [ 00:09:56.253 { 00:09:56.253 "dma_device_id": "system", 00:09:56.253 "dma_device_type": 1 00:09:56.253 }, 00:09:56.253 { 00:09:56.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.253 "dma_device_type": 2 00:09:56.253 }, 00:09:56.253 { 00:09:56.253 "dma_device_id": "system", 00:09:56.253 "dma_device_type": 1 00:09:56.253 }, 00:09:56.253 { 00:09:56.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.253 "dma_device_type": 2 00:09:56.253 } 00:09:56.253 ], 00:09:56.253 "driver_specific": { 00:09:56.253 "raid": { 00:09:56.253 "uuid": "0f481865-905b-4e6f-9bd3-a66c5ad04826", 00:09:56.253 "strip_size_kb": 64, 00:09:56.253 "state": "online", 00:09:56.253 "raid_level": "raid0", 00:09:56.253 "superblock": true, 00:09:56.253 "num_base_bdevs": 2, 00:09:56.253 "num_base_bdevs_discovered": 2, 00:09:56.253 "num_base_bdevs_operational": 2, 00:09:56.253 "base_bdevs_list": [ 00:09:56.253 { 00:09:56.253 "name": "pt1", 00:09:56.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.253 "is_configured": true, 00:09:56.253 "data_offset": 2048, 00:09:56.253 "data_size": 63488 00:09:56.253 }, 00:09:56.253 { 00:09:56.253 "name": "pt2", 00:09:56.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.253 "is_configured": true, 00:09:56.253 "data_offset": 2048, 00:09:56.253 "data_size": 63488 00:09:56.253 } 
00:09:56.253 ] 00:09:56.253 } 00:09:56.253 } 00:09:56.253 }' 00:09:56.253 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.253 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:56.253 pt2' 00:09:56.253 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:56.253 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:56.253 10:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:56.511 "name": "pt1", 00:09:56.511 "aliases": [ 00:09:56.511 "00000000-0000-0000-0000-000000000001" 00:09:56.511 ], 00:09:56.511 "product_name": "passthru", 00:09:56.511 "block_size": 512, 00:09:56.511 "num_blocks": 65536, 00:09:56.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.511 "assigned_rate_limits": { 00:09:56.511 "rw_ios_per_sec": 0, 00:09:56.511 "rw_mbytes_per_sec": 0, 00:09:56.511 "r_mbytes_per_sec": 0, 00:09:56.511 "w_mbytes_per_sec": 0 00:09:56.511 }, 00:09:56.511 "claimed": true, 00:09:56.511 "claim_type": "exclusive_write", 00:09:56.511 "zoned": false, 00:09:56.511 "supported_io_types": { 00:09:56.511 "read": true, 00:09:56.511 "write": true, 00:09:56.511 "unmap": true, 00:09:56.511 "flush": true, 00:09:56.511 "reset": true, 00:09:56.511 "nvme_admin": false, 00:09:56.511 "nvme_io": false, 00:09:56.511 "nvme_io_md": false, 00:09:56.511 "write_zeroes": true, 00:09:56.511 "zcopy": true, 00:09:56.511 "get_zone_info": false, 00:09:56.511 "zone_management": false, 00:09:56.511 "zone_append": false, 00:09:56.511 "compare": false, 00:09:56.511 "compare_and_write": false, 00:09:56.511 "abort": true, 00:09:56.511 "seek_hole": false, 00:09:56.511 "seek_data": false, 00:09:56.511 "copy": true, 00:09:56.511 "nvme_iov_md": false 00:09:56.511 }, 00:09:56.511 "memory_domains": [ 00:09:56.511 { 00:09:56.511 "dma_device_id": "system", 00:09:56.511 "dma_device_type": 1 00:09:56.511 }, 00:09:56.511 { 00:09:56.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.511 "dma_device_type": 2 00:09:56.511 } 00:09:56.511 ], 00:09:56.511 "driver_specific": { 00:09:56.511 "passthru": { 00:09:56.511 "name": "pt1", 00:09:56.511 "base_bdev_name": "malloc1" 00:09:56.511 } 00:09:56.511 } 00:09:56.511 }' 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:56.511 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:56.770 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:56.770 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:09:56.770 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:56.770 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:56.770 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:56.770 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:56.770 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:57.028 "name": "pt2", 00:09:57.028 "aliases": [ 00:09:57.028 "00000000-0000-0000-0000-000000000002" 00:09:57.028 ], 00:09:57.028 "product_name": "passthru", 00:09:57.028 "block_size": 512, 00:09:57.028 "num_blocks": 65536, 00:09:57.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:57.028 "assigned_rate_limits": { 00:09:57.028 "rw_ios_per_sec": 0, 00:09:57.028 "rw_mbytes_per_sec": 0, 00:09:57.028 "r_mbytes_per_sec": 0, 00:09:57.028 "w_mbytes_per_sec": 0 00:09:57.028 }, 00:09:57.028 "claimed": true, 00:09:57.028 "claim_type": "exclusive_write", 00:09:57.028 "zoned": false, 00:09:57.028 "supported_io_types": { 00:09:57.028 "read": true, 00:09:57.028 "write": true, 00:09:57.028 "unmap": true, 00:09:57.028 "flush": true, 00:09:57.028 "reset": true, 00:09:57.028 "nvme_admin": false, 00:09:57.028 "nvme_io": false, 00:09:57.028 "nvme_io_md": false, 00:09:57.028 "write_zeroes": true, 00:09:57.028 "zcopy": true, 00:09:57.028 "get_zone_info": false, 00:09:57.028 "zone_management": false, 00:09:57.028 "zone_append": false, 00:09:57.028 "compare": false, 00:09:57.028 "compare_and_write": false, 00:09:57.028 "abort": true, 00:09:57.028 "seek_hole": false, 00:09:57.028 "seek_data": false, 00:09:57.028 "copy": true, 00:09:57.028 "nvme_iov_md": false 00:09:57.028 }, 00:09:57.028 "memory_domains": [ 00:09:57.028 { 00:09:57.028 "dma_device_id": "system", 00:09:57.028 "dma_device_type": 1 00:09:57.028 }, 00:09:57.028 { 00:09:57.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.028 "dma_device_type": 2 00:09:57.028 } 00:09:57.028 ], 00:09:57.028 "driver_specific": { 00:09:57.028 "passthru": { 00:09:57.028 "name": "pt2", 00:09:57.028 "base_bdev_name": "malloc2" 00:09:57.028 } 00:09:57.028 } 00:09:57.028 }' 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:57.028 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:57.286 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:57.286 10:17:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:57.286 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:57.286 10:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:09:57.286 [2024-07-15 10:17:22.035077] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.286 10:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0f481865-905b-4e6f-9bd3-a66c5ad04826 '!=' 0f481865-905b-4e6f-9bd3-a66c5ad04826 ']' 00:09:57.286 10:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:09:57.286 10:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:57.286 10:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:57.286 10:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1745760 00:09:57.286 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1745760 ']' 00:09:57.286 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1745760 00:09:57.286 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:09:57.287 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:57.287 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745760 00:09:57.545 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:57.545 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:57.545 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745760' 00:09:57.545 killing process with pid 1745760 00:09:57.545 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1745760 00:09:57.545 [2024-07-15 10:17:22.102353] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:57.545 [2024-07-15 10:17:22.102393] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.545 [2024-07-15 10:17:22.102419] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.545 [2024-07-15 10:17:22.102426] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xacb120 name raid_bdev1, state offline 00:09:57.545 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1745760 00:09:57.545 [2024-07-15 10:17:22.117573] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.545 10:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:09:57.545 00:09:57.545 real 0m8.081s 00:09:57.545 user 0m14.322s 00:09:57.545 sys 0m1.559s 00:09:57.545 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.545 10:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.545 ************************************ 00:09:57.545 END TEST raid_superblock_test 00:09:57.545 ************************************ 00:09:57.545 10:17:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:57.545 10:17:22 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test 
raid_read_error_test raid_io_error_test raid0 2 read 00:09:57.545 10:17:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:57.545 10:17:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.545 10:17:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.803 ************************************ 00:09:57.803 START TEST raid_read_error_test 00:09:57.803 ************************************ 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.swG437DdYT 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1747342 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1747342 /var/tmp/spdk-raid.sock 00:09:57.803 10:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1747342 ']' 
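The trace above creates the bdevperf log with mktemp and launches bdevperf against a private RPC socket before waiting for it to come up. A minimal sketch of that launch-and-wait pattern, condensed from the flags visible in the trace; the $SPDK shorthand, the redirection of output into the log file, and the rpc_get_methods polling loop (a rough stand-in for waitforlisten, whose body is not shown here) are assumptions:

    bdevperf_log=$(mktemp -p /raidtest)
    $SPDK/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
    raid_pid=$!
    # rough stand-in for waitforlisten: poll the RPC socket until bdevperf answers
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done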
00:09:57.804 10:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:57.804 10:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.804 10:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:57.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:57.804 10:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.804 10:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.804 [2024-07-15 10:17:22.416328] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:57.804 [2024-07-15 10:17:22.416372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1747342 ] 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:01.0 cannot be 
used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:57.804 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:57.804 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:57.804 [2024-07-15 10:17:22.507239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.804 [2024-07-15 10:17:22.581365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.062 [2024-07-15 10:17:22.634823] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.062 [2024-07-15 10:17:22.634850] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.631 10:17:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:58.631 10:17:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:58.631 10:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:58.631 10:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:58.631 BaseBdev1_malloc 00:09:58.631 10:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:58.935 true 00:09:58.935 10:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
EE_BaseBdev1_malloc -p BaseBdev1 00:09:59.195 [2024-07-15 10:17:23.715062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:59.195 [2024-07-15 10:17:23.715096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.195 [2024-07-15 10:17:23.715111] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d0b190 00:09:59.195 [2024-07-15 10:17:23.715119] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.195 [2024-07-15 10:17:23.716310] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.195 [2024-07-15 10:17:23.716331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:59.195 BaseBdev1 00:09:59.195 10:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:59.195 10:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:59.195 BaseBdev2_malloc 00:09:59.195 10:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:59.453 true 00:09:59.453 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:59.453 [2024-07-15 10:17:24.227792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:59.453 [2024-07-15 10:17:24.227823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.453 [2024-07-15 10:17:24.227836] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d0fe20 00:09:59.453 [2024-07-15 10:17:24.227860] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.453 [2024-07-15 10:17:24.228844] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.453 [2024-07-15 10:17:24.228865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:59.453 BaseBdev2 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:59.711 [2024-07-15 10:17:24.392236] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.711 [2024-07-15 10:17:24.392986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.711 [2024-07-15 10:17:24.393102] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d11a50 00:09:59.711 [2024-07-15 10:17:24.393111] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:59.711 [2024-07-15 10:17:24.393219] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b66070 00:09:59.711 [2024-07-15 10:17:24.393310] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d11a50 00:09:59.711 [2024-07-15 10:17:24.393316] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d11a50 00:09:59.711 [2024-07-15 
10:17:24.393377] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.711 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.969 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:59.969 "name": "raid_bdev1", 00:09:59.969 "uuid": "c12d41b3-0bc5-43bb-9314-f685f854e9b2", 00:09:59.969 "strip_size_kb": 64, 00:09:59.969 "state": "online", 00:09:59.969 "raid_level": "raid0", 00:09:59.969 "superblock": true, 00:09:59.969 "num_base_bdevs": 2, 00:09:59.969 "num_base_bdevs_discovered": 2, 00:09:59.969 "num_base_bdevs_operational": 2, 00:09:59.969 "base_bdevs_list": [ 00:09:59.969 { 00:09:59.969 "name": "BaseBdev1", 00:09:59.969 "uuid": "8538cbf8-3451-5a64-a7e6-854044777699", 00:09:59.969 "is_configured": true, 00:09:59.969 "data_offset": 2048, 00:09:59.969 "data_size": 63488 00:09:59.969 }, 00:09:59.969 { 00:09:59.969 "name": "BaseBdev2", 00:09:59.969 "uuid": "edf816f8-81fc-5517-b3a6-0230cac9fd57", 00:09:59.969 "is_configured": true, 00:09:59.969 "data_offset": 2048, 00:09:59.969 "data_size": 63488 00:09:59.969 } 00:09:59.969 ] 00:09:59.969 }' 00:09:59.969 10:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:59.969 10:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.534 10:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:00.534 10:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:00.534 [2024-07-15 10:17:25.166425] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d0ca80 00:10:01.468 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 
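The RPCs traced above give the read-error test a failure-injection point underneath each RAID member: a malloc backing bdev is wrapped in an error bdev, which is in turn wrapped in a passthru bdev that the RAID consumes, so failures can be injected without the RAID seeing anything but an ordinary base bdev. Condensed from the commands in the trace (rpc.py stands for the full /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py path, and the loop is a compaction of the two per-bdev sequences shown above):

    for i in 1 2; do
        rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
        rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev${i}_malloc        # creates EE_BaseBdev${i}_malloc
        rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    # once bdevperf traffic is running, read failures are injected below BaseBdev1:
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure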
00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:01.726 "name": "raid_bdev1", 00:10:01.726 "uuid": "c12d41b3-0bc5-43bb-9314-f685f854e9b2", 00:10:01.726 "strip_size_kb": 64, 00:10:01.726 "state": "online", 00:10:01.726 "raid_level": "raid0", 00:10:01.726 "superblock": true, 00:10:01.726 "num_base_bdevs": 2, 00:10:01.726 "num_base_bdevs_discovered": 2, 00:10:01.726 "num_base_bdevs_operational": 2, 00:10:01.726 "base_bdevs_list": [ 00:10:01.726 { 00:10:01.726 "name": "BaseBdev1", 00:10:01.726 "uuid": "8538cbf8-3451-5a64-a7e6-854044777699", 00:10:01.726 "is_configured": true, 00:10:01.726 "data_offset": 2048, 00:10:01.726 "data_size": 63488 00:10:01.726 }, 00:10:01.726 { 00:10:01.726 "name": "BaseBdev2", 00:10:01.726 "uuid": "edf816f8-81fc-5517-b3a6-0230cac9fd57", 00:10:01.726 "is_configured": true, 00:10:01.726 "data_offset": 2048, 00:10:01.726 "data_size": 63488 00:10:01.726 } 00:10:01.726 ] 00:10:01.726 }' 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:01.726 10:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.292 10:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:02.550 [2024-07-15 10:17:27.098076] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.550 [2024-07-15 10:17:27.098105] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.551 [2024-07-15 10:17:27.100122] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.551 [2024-07-15 10:17:27.100144] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.551 [2024-07-15 10:17:27.100162] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
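verify_raid_bdev_state, whose local-variable setup appears in the trace, reads the raid bdev back through bdev_raid_get_bdevs and compares it against the expected state. Only the jq select is visible here; the individual field checks below are a sketch of the likely follow-up rather than a copy of bdev_raid.sh:

    tmp=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(echo "$tmp" | jq -r '.state') == online ]]
    [[ $(echo "$tmp" | jq -r '.raid_level') == raid0 ]]
    [[ $(echo "$tmp" | jq -r '.num_base_bdevs_discovered') -eq 2 ]]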
00:10:02.551 [2024-07-15 10:17:27.100168] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d11a50 name raid_bdev1, state offline 00:10:02.551 0 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1747342 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1747342 ']' 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1747342 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1747342 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1747342' 00:10:02.551 killing process with pid 1747342 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1747342 00:10:02.551 [2024-07-15 10:17:27.173428] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.551 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1747342 00:10:02.551 [2024-07-15 10:17:27.182508] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.swG437DdYT 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:02.809 00:10:02.809 real 0m5.002s 00:10:02.809 user 0m7.548s 00:10:02.809 sys 0m0.875s 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.809 10:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.809 ************************************ 00:10:02.809 END TEST raid_read_error_test 00:10:02.809 ************************************ 00:10:02.809 10:17:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:02.809 10:17:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:02.809 10:17:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:02.809 10:17:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.809 10:17:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.809 ************************************ 00:10:02.809 START TEST raid_write_error_test 00:10:02.809 ************************************ 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:02.809 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.NUf1F9j8iz 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1748242 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1748242 /var/tmp/spdk-raid.sock 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1748242 ']' 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
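Each error test ends by pulling the failure rate out of its bdevperf log: the read-test teardown above filters the raid_bdev1 job line from /raidtest/tmp.swG437DdYT, takes the sixth field, and asserts it is not 0.00, and the write test starting here repeats the check against /raidtest/tmp.NUf1F9j8iz. A condensed form of that check (the exact pipeline inside bdev_raid.sh@843 is only partially visible, so this is an approximation):

    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s != "0.00" ]]   # some of the injected I/O errors must show up as failed I/Os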
00:10:02.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.810 10:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.810 [2024-07-15 10:17:27.526549] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:02.810 [2024-07-15 10:17:27.526595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748242 ] 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:01.3 cannot be used 
00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:02.810 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:02.810 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:03.068 [2024-07-15 10:17:27.618172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.068 [2024-07-15 10:17:27.692647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.068 [2024-07-15 10:17:27.749895] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.068 [2024-07-15 10:17:27.749926] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.631 10:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.631 10:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:03.631 10:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:03.631 10:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:03.888 BaseBdev1_malloc 00:10:03.888 10:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:03.888 true 00:10:03.888 10:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.145 [2024-07-15 10:17:28.805914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.145 [2024-07-15 10:17:28.805946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.145 [2024-07-15 10:17:28.805961] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1147190 00:10:04.145 
[2024-07-15 10:17:28.805969] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.145 [2024-07-15 10:17:28.807110] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.145 [2024-07-15 10:17:28.807133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.145 BaseBdev1 00:10:04.145 10:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:04.145 10:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.402 BaseBdev2_malloc 00:10:04.402 10:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:04.402 true 00:10:04.402 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:04.658 [2024-07-15 10:17:29.306835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:04.658 [2024-07-15 10:17:29.306867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.658 [2024-07-15 10:17:29.306883] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x114be20 00:10:04.658 [2024-07-15 10:17:29.306891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.658 [2024-07-15 10:17:29.307931] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.658 [2024-07-15 10:17:29.307953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.658 BaseBdev2 00:10:04.659 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:04.916 [2024-07-15 10:17:29.463253] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.916 [2024-07-15 10:17:29.464156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.916 [2024-07-15 10:17:29.464285] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x114da50 00:10:04.916 [2024-07-15 10:17:29.464295] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:04.916 [2024-07-15 10:17:29.464421] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfa2070 00:10:04.916 [2024-07-15 10:17:29.464522] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x114da50 00:10:04.916 [2024-07-15 10:17:29.464532] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x114da50 00:10:04.916 [2024-07-15 10:17:29.464600] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:04.916 "name": "raid_bdev1", 00:10:04.916 "uuid": "d898bf63-86d9-4bc4-afe6-93bb2150ffdf", 00:10:04.916 "strip_size_kb": 64, 00:10:04.916 "state": "online", 00:10:04.916 "raid_level": "raid0", 00:10:04.916 "superblock": true, 00:10:04.916 "num_base_bdevs": 2, 00:10:04.916 "num_base_bdevs_discovered": 2, 00:10:04.916 "num_base_bdevs_operational": 2, 00:10:04.916 "base_bdevs_list": [ 00:10:04.916 { 00:10:04.916 "name": "BaseBdev1", 00:10:04.916 "uuid": "997dff9e-d258-57f9-877e-9a4b4a059dd0", 00:10:04.916 "is_configured": true, 00:10:04.916 "data_offset": 2048, 00:10:04.916 "data_size": 63488 00:10:04.916 }, 00:10:04.916 { 00:10:04.916 "name": "BaseBdev2", 00:10:04.916 "uuid": "93693ee8-1c3b-5cdd-a4bd-b9e0d264cee8", 00:10:04.916 "is_configured": true, 00:10:04.916 "data_offset": 2048, 00:10:04.916 "data_size": 63488 00:10:04.916 } 00:10:04.916 ] 00:10:04.916 }' 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.916 10:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.479 10:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:05.479 10:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:05.479 [2024-07-15 10:17:30.213375] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1148a80 00:10:06.411 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:06.667 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:06.667 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:06.667 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:06.667 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:06.667 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:06.668 10:17:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.668 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.925 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.925 "name": "raid_bdev1", 00:10:06.925 "uuid": "d898bf63-86d9-4bc4-afe6-93bb2150ffdf", 00:10:06.925 "strip_size_kb": 64, 00:10:06.925 "state": "online", 00:10:06.925 "raid_level": "raid0", 00:10:06.925 "superblock": true, 00:10:06.925 "num_base_bdevs": 2, 00:10:06.925 "num_base_bdevs_discovered": 2, 00:10:06.925 "num_base_bdevs_operational": 2, 00:10:06.925 "base_bdevs_list": [ 00:10:06.925 { 00:10:06.925 "name": "BaseBdev1", 00:10:06.925 "uuid": "997dff9e-d258-57f9-877e-9a4b4a059dd0", 00:10:06.925 "is_configured": true, 00:10:06.925 "data_offset": 2048, 00:10:06.925 "data_size": 63488 00:10:06.925 }, 00:10:06.925 { 00:10:06.925 "name": "BaseBdev2", 00:10:06.925 "uuid": "93693ee8-1c3b-5cdd-a4bd-b9e0d264cee8", 00:10:06.925 "is_configured": true, 00:10:06.925 "data_offset": 2048, 00:10:06.925 "data_size": 63488 00:10:06.925 } 00:10:06.925 ] 00:10:06.925 }' 00:10:06.925 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.925 10:17:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.489 10:17:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:07.489 [2024-07-15 10:17:32.137029] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.489 [2024-07-15 10:17:32.137058] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.489 [2024-07-15 10:17:32.139177] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.489 [2024-07-15 10:17:32.139199] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.489 [2024-07-15 10:17:32.139218] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.489 [2024-07-15 10:17:32.139226] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x114da50 name raid_bdev1, state offline 00:10:07.489 0 00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1748242 00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1748242 ']' 
00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1748242 00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1748242 00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.489 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1748242' 00:10:07.490 killing process with pid 1748242 00:10:07.490 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1748242 00:10:07.490 [2024-07-15 10:17:32.210758] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.490 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1748242 00:10:07.490 [2024-07-15 10:17:32.219946] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.NUf1F9j8iz 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:07.747 00:10:07.747 real 0m4.949s 00:10:07.747 user 0m7.444s 00:10:07.747 sys 0m0.854s 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.747 10:17:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.747 ************************************ 00:10:07.747 END TEST raid_write_error_test 00:10:07.747 ************************************ 00:10:07.747 10:17:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:07.747 10:17:32 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:07.747 10:17:32 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:07.747 10:17:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:07.747 10:17:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.747 10:17:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.747 ************************************ 00:10:07.747 START TEST raid_state_function_test 00:10:07.747 ************************************ 00:10:07.747 10:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1749173 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1749173' 00:10:07.748 Process raid pid: 1749173 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1749173 /var/tmp/spdk-raid.sock 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1749173 ']' 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
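raid_state_function_test, whose setup appears above, drives a bare bdev_svc app rather than bdevperf and exercises the raid state machine directly over RPC. Its first step, visible further down in the trace, creates a concat array from base bdevs that do not exist yet and expects the result to sit in the "configuring" state; roughly:

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # neither BaseBdev1 nor BaseBdev2 exists yet, so the array cannot assemble;
    # verify_raid_bdev_state then expects Existed_Raid to report the "configuring" state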
00:10:07.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.748 10:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.006 [2024-07-15 10:17:32.551661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:08.006 [2024-07-15 10:17:32.551706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:08.006 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:08.006 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:08.006 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:08.006 [2024-07-15 10:17:32.645307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.006 [2024-07-15 10:17:32.718808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.006 [2024-07-15 10:17:32.777711] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.006 [2024-07-15 10:17:32.777741] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.577 10:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.577 10:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:08.577 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:08.834 [2024-07-15 10:17:33.477950] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.834 [2024-07-15 10:17:33.477983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.834 [2024-07-15 10:17:33.477990] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.834 [2024-07-15 10:17:33.477997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.834 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.091 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:09.091 "name": "Existed_Raid", 00:10:09.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.091 "strip_size_kb": 64, 00:10:09.091 "state": "configuring", 00:10:09.091 "raid_level": "concat", 00:10:09.091 "superblock": false, 00:10:09.091 "num_base_bdevs": 2, 00:10:09.091 "num_base_bdevs_discovered": 0, 00:10:09.091 "num_base_bdevs_operational": 2, 00:10:09.091 "base_bdevs_list": [ 00:10:09.091 { 00:10:09.091 "name": "BaseBdev1", 00:10:09.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.091 "is_configured": false, 00:10:09.091 "data_offset": 0, 00:10:09.091 "data_size": 0 00:10:09.091 }, 00:10:09.091 { 00:10:09.091 "name": "BaseBdev2", 00:10:09.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.091 "is_configured": false, 00:10:09.091 "data_offset": 0, 00:10:09.091 "data_size": 0 00:10:09.091 } 00:10:09.091 ] 00:10:09.091 }' 00:10:09.091 10:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:09.091 10:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.654 10:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:09.654 [2024-07-15 10:17:34.316012] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.654 [2024-07-15 10:17:34.316032] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x173af20 name Existed_Raid, state configuring 00:10:09.654 10:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:09.911 [2024-07-15 10:17:34.496485] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.911 [2024-07-15 10:17:34.496503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.911 [2024-07-15 10:17:34.496509] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.911 [2024-07-15 10:17:34.496516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.911 10:17:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.911 [2024-07-15 10:17:34.669356] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.911 BaseBdev1 00:10:09.911 10:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:09.911 10:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:09.911 10:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:09.911 10:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:09.911 10:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:09.911 10:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:09.911 10:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:10.168 10:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.425 [ 00:10:10.425 { 00:10:10.425 "name": "BaseBdev1", 00:10:10.425 "aliases": [ 00:10:10.425 "ed91540d-30f3-4449-898d-10f8088c7a00" 00:10:10.425 ], 00:10:10.425 "product_name": "Malloc disk", 00:10:10.425 "block_size": 512, 00:10:10.425 "num_blocks": 65536, 00:10:10.425 "uuid": "ed91540d-30f3-4449-898d-10f8088c7a00", 00:10:10.425 "assigned_rate_limits": { 00:10:10.425 "rw_ios_per_sec": 0, 00:10:10.425 "rw_mbytes_per_sec": 0, 00:10:10.425 "r_mbytes_per_sec": 0, 00:10:10.425 "w_mbytes_per_sec": 0 00:10:10.425 }, 00:10:10.425 "claimed": true, 00:10:10.425 "claim_type": "exclusive_write", 00:10:10.425 "zoned": false, 00:10:10.425 "supported_io_types": { 00:10:10.425 "read": true, 00:10:10.425 "write": true, 00:10:10.425 "unmap": true, 00:10:10.425 "flush": true, 00:10:10.425 "reset": true, 00:10:10.425 "nvme_admin": false, 00:10:10.425 "nvme_io": false, 00:10:10.425 "nvme_io_md": false, 00:10:10.425 "write_zeroes": true, 00:10:10.425 "zcopy": true, 00:10:10.425 "get_zone_info": false, 00:10:10.425 "zone_management": false, 00:10:10.425 "zone_append": false, 00:10:10.425 "compare": false, 00:10:10.425 "compare_and_write": false, 00:10:10.425 "abort": true, 00:10:10.425 "seek_hole": false, 00:10:10.425 "seek_data": false, 00:10:10.425 "copy": true, 00:10:10.425 "nvme_iov_md": false 00:10:10.425 }, 00:10:10.425 "memory_domains": [ 00:10:10.425 { 00:10:10.425 "dma_device_id": "system", 00:10:10.425 "dma_device_type": 1 00:10:10.425 }, 00:10:10.425 { 00:10:10.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.425 "dma_device_type": 2 00:10:10.425 } 00:10:10.425 ], 00:10:10.425 "driver_specific": {} 00:10:10.425 } 00:10:10.425 ] 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
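The trace above first issues bdev_raid_create against base bdevs that do not exist yet, leaving the array in the "configuring" state, and then registers the first member with bdev_malloc_create and inspects it. A minimal sketch of the same RPC sequence, assuming only an SPDK target already listening on /var/tmp/spdk-raid.sock and rpc.py from the SPDK checkout (names and sizes are taken from the trace, not invented):

    # Sketch mirroring the RPCs in the trace above; not the test script itself.
    RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Creating the raid before its members exist leaves it in the "configuring" state.
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # Register the first member: a 32 MiB malloc bdev with 512-byte blocks.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b BaseBdev1 -t 2000
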
00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:10.425 "name": "Existed_Raid", 00:10:10.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.425 "strip_size_kb": 64, 00:10:10.425 "state": "configuring", 00:10:10.425 "raid_level": "concat", 00:10:10.425 "superblock": false, 00:10:10.425 "num_base_bdevs": 2, 00:10:10.425 "num_base_bdevs_discovered": 1, 00:10:10.425 "num_base_bdevs_operational": 2, 00:10:10.425 "base_bdevs_list": [ 00:10:10.425 { 00:10:10.425 "name": "BaseBdev1", 00:10:10.425 "uuid": "ed91540d-30f3-4449-898d-10f8088c7a00", 00:10:10.425 "is_configured": true, 00:10:10.425 "data_offset": 0, 00:10:10.425 "data_size": 65536 00:10:10.425 }, 00:10:10.425 { 00:10:10.425 "name": "BaseBdev2", 00:10:10.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.425 "is_configured": false, 00:10:10.425 "data_offset": 0, 00:10:10.425 "data_size": 0 00:10:10.425 } 00:10:10.425 ] 00:10:10.425 }' 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:10.425 10:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.989 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:11.246 [2024-07-15 10:17:35.844378] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.246 [2024-07-15 10:17:35.844408] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x173a810 name Existed_Raid, state configuring 00:10:11.246 10:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:11.246 [2024-07-15 10:17:36.024860] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.246 [2024-07-15 10:17:36.025964] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.246 [2024-07-15 10:17:36.025993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.504 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:11.504 10:17:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:11.504 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:11.504 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:11.504 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:11.505 "name": "Existed_Raid", 00:10:11.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.505 "strip_size_kb": 64, 00:10:11.505 "state": "configuring", 00:10:11.505 "raid_level": "concat", 00:10:11.505 "superblock": false, 00:10:11.505 "num_base_bdevs": 2, 00:10:11.505 "num_base_bdevs_discovered": 1, 00:10:11.505 "num_base_bdevs_operational": 2, 00:10:11.505 "base_bdevs_list": [ 00:10:11.505 { 00:10:11.505 "name": "BaseBdev1", 00:10:11.505 "uuid": "ed91540d-30f3-4449-898d-10f8088c7a00", 00:10:11.505 "is_configured": true, 00:10:11.505 "data_offset": 0, 00:10:11.505 "data_size": 65536 00:10:11.505 }, 00:10:11.505 { 00:10:11.505 "name": "BaseBdev2", 00:10:11.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.505 "is_configured": false, 00:10:11.505 "data_offset": 0, 00:10:11.505 "data_size": 0 00:10:11.505 } 00:10:11.505 ] 00:10:11.505 }' 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:11.505 10:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.070 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.364 [2024-07-15 10:17:36.865744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.364 [2024-07-15 10:17:36.865773] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x173b600 00:10:12.364 [2024-07-15 10:17:36.865779] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:12.364 [2024-07-15 10:17:36.865924] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17340e0 00:10:12.364 [2024-07-15 10:17:36.866013] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x173b600 00:10:12.364 [2024-07-15 10:17:36.866020] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x173b600 00:10:12.364 [2024-07-15 10:17:36.866142] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.364 BaseBdev2 00:10:12.364 10:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:12.364 10:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:12.364 10:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:12.365 10:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:12.365 10:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:12.365 10:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:12.365 10:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:12.365 10:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.623 [ 00:10:12.623 { 00:10:12.623 "name": "BaseBdev2", 00:10:12.623 "aliases": [ 00:10:12.623 "21bfe4ba-3337-4677-86b0-3d8207517f04" 00:10:12.623 ], 00:10:12.623 "product_name": "Malloc disk", 00:10:12.623 "block_size": 512, 00:10:12.623 "num_blocks": 65536, 00:10:12.623 "uuid": "21bfe4ba-3337-4677-86b0-3d8207517f04", 00:10:12.623 "assigned_rate_limits": { 00:10:12.623 "rw_ios_per_sec": 0, 00:10:12.623 "rw_mbytes_per_sec": 0, 00:10:12.623 "r_mbytes_per_sec": 0, 00:10:12.623 "w_mbytes_per_sec": 0 00:10:12.623 }, 00:10:12.623 "claimed": true, 00:10:12.623 "claim_type": "exclusive_write", 00:10:12.623 "zoned": false, 00:10:12.623 "supported_io_types": { 00:10:12.623 "read": true, 00:10:12.623 "write": true, 00:10:12.623 "unmap": true, 00:10:12.623 "flush": true, 00:10:12.623 "reset": true, 00:10:12.623 "nvme_admin": false, 00:10:12.623 "nvme_io": false, 00:10:12.623 "nvme_io_md": false, 00:10:12.623 "write_zeroes": true, 00:10:12.623 "zcopy": true, 00:10:12.623 "get_zone_info": false, 00:10:12.623 "zone_management": false, 00:10:12.623 "zone_append": false, 00:10:12.623 "compare": false, 00:10:12.623 "compare_and_write": false, 00:10:12.623 "abort": true, 00:10:12.623 "seek_hole": false, 00:10:12.623 "seek_data": false, 00:10:12.623 "copy": true, 00:10:12.623 "nvme_iov_md": false 00:10:12.623 }, 00:10:12.623 "memory_domains": [ 00:10:12.623 { 00:10:12.623 "dma_device_id": "system", 00:10:12.623 "dma_device_type": 1 00:10:12.623 }, 00:10:12.623 { 00:10:12.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.623 "dma_device_type": 2 00:10:12.623 } 00:10:12.623 ], 00:10:12.623 "driver_specific": {} 00:10:12.623 } 00:10:12.623 ] 00:10:12.623 10:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:12.623 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state 
Existed_Raid online concat 64 2 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.624 "name": "Existed_Raid", 00:10:12.624 "uuid": "c51e20db-d352-4973-aa94-b9c68d85e91f", 00:10:12.624 "strip_size_kb": 64, 00:10:12.624 "state": "online", 00:10:12.624 "raid_level": "concat", 00:10:12.624 "superblock": false, 00:10:12.624 "num_base_bdevs": 2, 00:10:12.624 "num_base_bdevs_discovered": 2, 00:10:12.624 "num_base_bdevs_operational": 2, 00:10:12.624 "base_bdevs_list": [ 00:10:12.624 { 00:10:12.624 "name": "BaseBdev1", 00:10:12.624 "uuid": "ed91540d-30f3-4449-898d-10f8088c7a00", 00:10:12.624 "is_configured": true, 00:10:12.624 "data_offset": 0, 00:10:12.624 "data_size": 65536 00:10:12.624 }, 00:10:12.624 { 00:10:12.624 "name": "BaseBdev2", 00:10:12.624 "uuid": "21bfe4ba-3337-4677-86b0-3d8207517f04", 00:10:12.624 "is_configured": true, 00:10:12.624 "data_offset": 0, 00:10:12.624 "data_size": 65536 00:10:12.624 } 00:10:12.624 ] 00:10:12.624 }' 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.624 10:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.191 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.191 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:13.191 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:13.191 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:13.191 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:13.191 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:13.191 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:13.191 10:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:13.450 
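Once the second member exists the array transitions to "online" (two of two base bdevs discovered in the dump above). The state check the test keeps repeating is simply the raid RPC dump filtered with jq; a sketch under the same assumptions as before, plus jq being available:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # prints "online" at this point
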
[2024-07-15 10:17:38.049068] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.450 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:13.450 "name": "Existed_Raid", 00:10:13.450 "aliases": [ 00:10:13.450 "c51e20db-d352-4973-aa94-b9c68d85e91f" 00:10:13.450 ], 00:10:13.450 "product_name": "Raid Volume", 00:10:13.450 "block_size": 512, 00:10:13.450 "num_blocks": 131072, 00:10:13.450 "uuid": "c51e20db-d352-4973-aa94-b9c68d85e91f", 00:10:13.450 "assigned_rate_limits": { 00:10:13.450 "rw_ios_per_sec": 0, 00:10:13.450 "rw_mbytes_per_sec": 0, 00:10:13.450 "r_mbytes_per_sec": 0, 00:10:13.450 "w_mbytes_per_sec": 0 00:10:13.450 }, 00:10:13.450 "claimed": false, 00:10:13.450 "zoned": false, 00:10:13.450 "supported_io_types": { 00:10:13.450 "read": true, 00:10:13.450 "write": true, 00:10:13.450 "unmap": true, 00:10:13.450 "flush": true, 00:10:13.450 "reset": true, 00:10:13.450 "nvme_admin": false, 00:10:13.450 "nvme_io": false, 00:10:13.450 "nvme_io_md": false, 00:10:13.450 "write_zeroes": true, 00:10:13.450 "zcopy": false, 00:10:13.450 "get_zone_info": false, 00:10:13.450 "zone_management": false, 00:10:13.450 "zone_append": false, 00:10:13.450 "compare": false, 00:10:13.450 "compare_and_write": false, 00:10:13.450 "abort": false, 00:10:13.450 "seek_hole": false, 00:10:13.450 "seek_data": false, 00:10:13.450 "copy": false, 00:10:13.450 "nvme_iov_md": false 00:10:13.450 }, 00:10:13.450 "memory_domains": [ 00:10:13.450 { 00:10:13.450 "dma_device_id": "system", 00:10:13.450 "dma_device_type": 1 00:10:13.450 }, 00:10:13.450 { 00:10:13.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.450 "dma_device_type": 2 00:10:13.450 }, 00:10:13.450 { 00:10:13.450 "dma_device_id": "system", 00:10:13.450 "dma_device_type": 1 00:10:13.450 }, 00:10:13.450 { 00:10:13.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.450 "dma_device_type": 2 00:10:13.450 } 00:10:13.450 ], 00:10:13.450 "driver_specific": { 00:10:13.450 "raid": { 00:10:13.450 "uuid": "c51e20db-d352-4973-aa94-b9c68d85e91f", 00:10:13.450 "strip_size_kb": 64, 00:10:13.450 "state": "online", 00:10:13.450 "raid_level": "concat", 00:10:13.450 "superblock": false, 00:10:13.450 "num_base_bdevs": 2, 00:10:13.450 "num_base_bdevs_discovered": 2, 00:10:13.450 "num_base_bdevs_operational": 2, 00:10:13.450 "base_bdevs_list": [ 00:10:13.450 { 00:10:13.450 "name": "BaseBdev1", 00:10:13.450 "uuid": "ed91540d-30f3-4449-898d-10f8088c7a00", 00:10:13.450 "is_configured": true, 00:10:13.450 "data_offset": 0, 00:10:13.450 "data_size": 65536 00:10:13.450 }, 00:10:13.450 { 00:10:13.450 "name": "BaseBdev2", 00:10:13.450 "uuid": "21bfe4ba-3337-4677-86b0-3d8207517f04", 00:10:13.450 "is_configured": true, 00:10:13.450 "data_offset": 0, 00:10:13.450 "data_size": 65536 00:10:13.450 } 00:10:13.450 ] 00:10:13.450 } 00:10:13.450 } 00:10:13.450 }' 00:10:13.450 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.450 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:13.450 BaseBdev2' 00:10:13.450 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.450 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:13.450 10:17:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:13.709 "name": "BaseBdev1", 00:10:13.709 "aliases": [ 00:10:13.709 "ed91540d-30f3-4449-898d-10f8088c7a00" 00:10:13.709 ], 00:10:13.709 "product_name": "Malloc disk", 00:10:13.709 "block_size": 512, 00:10:13.709 "num_blocks": 65536, 00:10:13.709 "uuid": "ed91540d-30f3-4449-898d-10f8088c7a00", 00:10:13.709 "assigned_rate_limits": { 00:10:13.709 "rw_ios_per_sec": 0, 00:10:13.709 "rw_mbytes_per_sec": 0, 00:10:13.709 "r_mbytes_per_sec": 0, 00:10:13.709 "w_mbytes_per_sec": 0 00:10:13.709 }, 00:10:13.709 "claimed": true, 00:10:13.709 "claim_type": "exclusive_write", 00:10:13.709 "zoned": false, 00:10:13.709 "supported_io_types": { 00:10:13.709 "read": true, 00:10:13.709 "write": true, 00:10:13.709 "unmap": true, 00:10:13.709 "flush": true, 00:10:13.709 "reset": true, 00:10:13.709 "nvme_admin": false, 00:10:13.709 "nvme_io": false, 00:10:13.709 "nvme_io_md": false, 00:10:13.709 "write_zeroes": true, 00:10:13.709 "zcopy": true, 00:10:13.709 "get_zone_info": false, 00:10:13.709 "zone_management": false, 00:10:13.709 "zone_append": false, 00:10:13.709 "compare": false, 00:10:13.709 "compare_and_write": false, 00:10:13.709 "abort": true, 00:10:13.709 "seek_hole": false, 00:10:13.709 "seek_data": false, 00:10:13.709 "copy": true, 00:10:13.709 "nvme_iov_md": false 00:10:13.709 }, 00:10:13.709 "memory_domains": [ 00:10:13.709 { 00:10:13.709 "dma_device_id": "system", 00:10:13.709 "dma_device_type": 1 00:10:13.709 }, 00:10:13.709 { 00:10:13.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.709 "dma_device_type": 2 00:10:13.709 } 00:10:13.709 ], 00:10:13.709 "driver_specific": {} 00:10:13.709 }' 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.709 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:10:13.968 "name": "BaseBdev2", 00:10:13.968 "aliases": [ 00:10:13.968 "21bfe4ba-3337-4677-86b0-3d8207517f04" 00:10:13.968 ], 00:10:13.968 "product_name": "Malloc disk", 00:10:13.968 "block_size": 512, 00:10:13.968 "num_blocks": 65536, 00:10:13.968 "uuid": "21bfe4ba-3337-4677-86b0-3d8207517f04", 00:10:13.968 "assigned_rate_limits": { 00:10:13.968 "rw_ios_per_sec": 0, 00:10:13.968 "rw_mbytes_per_sec": 0, 00:10:13.968 "r_mbytes_per_sec": 0, 00:10:13.968 "w_mbytes_per_sec": 0 00:10:13.968 }, 00:10:13.968 "claimed": true, 00:10:13.968 "claim_type": "exclusive_write", 00:10:13.968 "zoned": false, 00:10:13.968 "supported_io_types": { 00:10:13.968 "read": true, 00:10:13.968 "write": true, 00:10:13.968 "unmap": true, 00:10:13.968 "flush": true, 00:10:13.968 "reset": true, 00:10:13.968 "nvme_admin": false, 00:10:13.968 "nvme_io": false, 00:10:13.968 "nvme_io_md": false, 00:10:13.968 "write_zeroes": true, 00:10:13.968 "zcopy": true, 00:10:13.968 "get_zone_info": false, 00:10:13.968 "zone_management": false, 00:10:13.968 "zone_append": false, 00:10:13.968 "compare": false, 00:10:13.968 "compare_and_write": false, 00:10:13.968 "abort": true, 00:10:13.968 "seek_hole": false, 00:10:13.968 "seek_data": false, 00:10:13.968 "copy": true, 00:10:13.968 "nvme_iov_md": false 00:10:13.968 }, 00:10:13.968 "memory_domains": [ 00:10:13.968 { 00:10:13.968 "dma_device_id": "system", 00:10:13.968 "dma_device_type": 1 00:10:13.968 }, 00:10:13.968 { 00:10:13.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.968 "dma_device_type": 2 00:10:13.968 } 00:10:13.968 ], 00:10:13.968 "driver_specific": {} 00:10:13.968 }' 00:10:13.968 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.226 10:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.226 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:14.226 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:14.485 [2024-07-15 10:17:39.167822] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.485 [2024-07-15 10:17:39.167847] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.485 [2024-07-15 10:17:39.167875] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:14.485 10:17:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.485 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.744 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.744 "name": "Existed_Raid", 00:10:14.744 "uuid": "c51e20db-d352-4973-aa94-b9c68d85e91f", 00:10:14.744 "strip_size_kb": 64, 00:10:14.744 "state": "offline", 00:10:14.744 "raid_level": "concat", 00:10:14.744 "superblock": false, 00:10:14.744 "num_base_bdevs": 2, 00:10:14.744 "num_base_bdevs_discovered": 1, 00:10:14.744 "num_base_bdevs_operational": 1, 00:10:14.744 "base_bdevs_list": [ 00:10:14.744 { 00:10:14.744 "name": null, 00:10:14.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.744 "is_configured": false, 00:10:14.744 "data_offset": 0, 00:10:14.744 "data_size": 65536 00:10:14.744 }, 00:10:14.744 { 00:10:14.744 "name": "BaseBdev2", 00:10:14.744 "uuid": "21bfe4ba-3337-4677-86b0-3d8207517f04", 00:10:14.744 "is_configured": true, 00:10:14.744 "data_offset": 0, 00:10:14.744 "data_size": 65536 00:10:14.744 } 00:10:14.744 ] 00:10:14.744 }' 00:10:14.744 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.744 10:17:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.311 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:15.311 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:15.311 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.311 10:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
jq -r '.[0]["name"]' 00:10:15.311 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:15.311 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.311 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:15.570 [2024-07-15 10:17:40.171286] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:15.570 [2024-07-15 10:17:40.171325] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x173b600 name Existed_Raid, state offline 00:10:15.570 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:15.570 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:15.570 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.570 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1749173 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1749173 ']' 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1749173 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1749173 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1749173' 00:10:15.829 killing process with pid 1749173 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1749173 00:10:15.829 [2024-07-15 10:17:40.441655] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1749173 00:10:15.829 [2024-07-15 10:17:40.442462] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:15.829 00:10:15.829 real 0m8.127s 00:10:15.829 user 0m14.293s 00:10:15.829 sys 0m1.606s 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.829 10:17:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.829 ************************************ 00:10:15.829 END TEST 
raid_state_function_test 00:10:15.829 ************************************ 00:10:16.088 10:17:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:16.088 10:17:40 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:16.088 10:17:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:16.088 10:17:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.088 10:17:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.088 ************************************ 00:10:16.088 START TEST raid_state_function_test_sb 00:10:16.088 ************************************ 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1750934 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1750934' 00:10:16.088 Process raid pid: 1750934 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1750934 /var/tmp/spdk-raid.sock 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1750934 ']' 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:16.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.088 10:17:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.088 [2024-07-15 10:17:40.738118] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:16.088 [2024-07-15 10:17:40.738162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:16.088 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.088 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:16.088 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:16.089 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.089 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:16.089 [2024-07-15 10:17:40.825925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.346 [2024-07-15 10:17:40.901023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.346 [2024-07-15 10:17:40.950675] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.346 [2024-07-15 10:17:40.950700] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.912 10:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.912 10:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:16.912 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:16.912 [2024-07-15 10:17:41.689670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.912 [2024-07-15 10:17:41.689703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.912 [2024-07-15 10:17:41.689710] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.912 [2024-07-15 10:17:41.689718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:17.170 "name": "Existed_Raid", 00:10:17.170 "uuid": "892b00e4-5e85-4527-9780-54bf16db91da", 00:10:17.170 "strip_size_kb": 64, 00:10:17.170 "state": "configuring", 00:10:17.170 "raid_level": "concat", 00:10:17.170 "superblock": true, 00:10:17.170 "num_base_bdevs": 2, 00:10:17.170 "num_base_bdevs_discovered": 0, 00:10:17.170 "num_base_bdevs_operational": 2, 00:10:17.170 "base_bdevs_list": [ 00:10:17.170 { 00:10:17.170 "name": "BaseBdev1", 00:10:17.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.170 "is_configured": false, 00:10:17.170 "data_offset": 0, 00:10:17.170 "data_size": 0 00:10:17.170 }, 00:10:17.170 { 00:10:17.170 "name": "BaseBdev2", 00:10:17.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.170 "is_configured": false, 00:10:17.170 "data_offset": 0, 00:10:17.170 "data_size": 0 00:10:17.170 } 00:10:17.170 ] 00:10:17.170 }' 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:17.170 10:17:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.760 10:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
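The _sb variant repeats the same flow with the -s (superblock) flag added to bdev_raid_create, so that the raid configuration is stored on the base bdevs themselves. A sketch of the create call as exercised above, same socket and bdev names:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
        -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
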
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:17.760 [2024-07-15 10:17:42.511680] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.760 [2024-07-15 10:17:42.511700] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd89f20 name Existed_Raid, state configuring 00:10:17.760 10:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:18.019 [2024-07-15 10:17:42.680129] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.019 [2024-07-15 10:17:42.680147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.019 [2024-07-15 10:17:42.680153] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.019 [2024-07-15 10:17:42.680160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.019 10:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.277 [2024-07-15 10:17:42.849060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.277 BaseBdev1 00:10:18.277 10:17:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:18.277 10:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:18.277 10:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:18.277 10:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:18.277 10:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:18.277 10:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:18.277 10:17:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:18.277 10:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.535 [ 00:10:18.535 { 00:10:18.535 "name": "BaseBdev1", 00:10:18.535 "aliases": [ 00:10:18.535 "a70d30bd-adf4-4e1f-9f46-16513348161d" 00:10:18.535 ], 00:10:18.535 "product_name": "Malloc disk", 00:10:18.535 "block_size": 512, 00:10:18.535 "num_blocks": 65536, 00:10:18.535 "uuid": "a70d30bd-adf4-4e1f-9f46-16513348161d", 00:10:18.535 "assigned_rate_limits": { 00:10:18.535 "rw_ios_per_sec": 0, 00:10:18.535 "rw_mbytes_per_sec": 0, 00:10:18.535 "r_mbytes_per_sec": 0, 00:10:18.535 "w_mbytes_per_sec": 0 00:10:18.535 }, 00:10:18.535 "claimed": true, 00:10:18.535 "claim_type": "exclusive_write", 00:10:18.535 "zoned": false, 00:10:18.535 "supported_io_types": { 00:10:18.535 "read": true, 00:10:18.535 "write": true, 00:10:18.535 "unmap": true, 00:10:18.535 "flush": true, 00:10:18.535 "reset": true, 00:10:18.535 "nvme_admin": false, 00:10:18.535 "nvme_io": false, 00:10:18.535 
"nvme_io_md": false, 00:10:18.535 "write_zeroes": true, 00:10:18.535 "zcopy": true, 00:10:18.535 "get_zone_info": false, 00:10:18.535 "zone_management": false, 00:10:18.535 "zone_append": false, 00:10:18.535 "compare": false, 00:10:18.535 "compare_and_write": false, 00:10:18.535 "abort": true, 00:10:18.535 "seek_hole": false, 00:10:18.535 "seek_data": false, 00:10:18.535 "copy": true, 00:10:18.535 "nvme_iov_md": false 00:10:18.535 }, 00:10:18.535 "memory_domains": [ 00:10:18.535 { 00:10:18.535 "dma_device_id": "system", 00:10:18.535 "dma_device_type": 1 00:10:18.535 }, 00:10:18.535 { 00:10:18.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.535 "dma_device_type": 2 00:10:18.535 } 00:10:18.535 ], 00:10:18.535 "driver_specific": {} 00:10:18.535 } 00:10:18.535 ] 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.535 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.793 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.793 "name": "Existed_Raid", 00:10:18.793 "uuid": "72587b00-085f-4a9f-888d-ad8ff37aedc7", 00:10:18.793 "strip_size_kb": 64, 00:10:18.793 "state": "configuring", 00:10:18.793 "raid_level": "concat", 00:10:18.793 "superblock": true, 00:10:18.793 "num_base_bdevs": 2, 00:10:18.793 "num_base_bdevs_discovered": 1, 00:10:18.793 "num_base_bdevs_operational": 2, 00:10:18.793 "base_bdevs_list": [ 00:10:18.793 { 00:10:18.793 "name": "BaseBdev1", 00:10:18.793 "uuid": "a70d30bd-adf4-4e1f-9f46-16513348161d", 00:10:18.793 "is_configured": true, 00:10:18.793 "data_offset": 2048, 00:10:18.793 "data_size": 63488 00:10:18.793 }, 00:10:18.793 { 00:10:18.793 "name": "BaseBdev2", 00:10:18.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.793 "is_configured": false, 00:10:18.793 "data_offset": 0, 00:10:18.793 "data_size": 0 00:10:18.793 } 00:10:18.793 ] 00:10:18.793 }' 00:10:18.793 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.793 10:17:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.357 10:17:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:19.357 [2024-07-15 10:17:44.000036] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.357 [2024-07-15 10:17:44.000075] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd89810 name Existed_Raid, state configuring 00:10:19.357 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:19.615 [2024-07-15 10:17:44.172511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.615 [2024-07-15 10:17:44.173536] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.615 [2024-07-15 10:17:44.173560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.615 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:19.615 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:19.615 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:19.616 "name": "Existed_Raid", 00:10:19.616 "uuid": "92fa3162-2cd7-4507-90ce-357eb6d0eed9", 00:10:19.616 "strip_size_kb": 64, 00:10:19.616 "state": "configuring", 00:10:19.616 "raid_level": "concat", 00:10:19.616 "superblock": true, 00:10:19.616 "num_base_bdevs": 2, 00:10:19.616 "num_base_bdevs_discovered": 1, 00:10:19.616 "num_base_bdevs_operational": 2, 00:10:19.616 "base_bdevs_list": [ 00:10:19.616 { 00:10:19.616 "name": "BaseBdev1", 00:10:19.616 "uuid": 
"a70d30bd-adf4-4e1f-9f46-16513348161d", 00:10:19.616 "is_configured": true, 00:10:19.616 "data_offset": 2048, 00:10:19.616 "data_size": 63488 00:10:19.616 }, 00:10:19.616 { 00:10:19.616 "name": "BaseBdev2", 00:10:19.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.616 "is_configured": false, 00:10:19.616 "data_offset": 0, 00:10:19.616 "data_size": 0 00:10:19.616 } 00:10:19.616 ] 00:10:19.616 }' 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:19.616 10:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.180 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:20.180 [2024-07-15 10:17:44.945229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.180 [2024-07-15 10:17:44.945343] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd8a600 00:10:20.180 [2024-07-15 10:17:44.945356] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:20.180 [2024-07-15 10:17:44.945475] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd8b840 00:10:20.180 [2024-07-15 10:17:44.945552] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd8a600 00:10:20.180 [2024-07-15 10:17:44.945559] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xd8a600 00:10:20.180 [2024-07-15 10:17:44.945620] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.180 BaseBdev2 00:10:20.180 10:17:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:20.180 10:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:20.180 10:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:20.180 10:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:20.180 10:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:20.180 10:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:20.181 10:17:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.438 10:17:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.696 [ 00:10:20.696 { 00:10:20.696 "name": "BaseBdev2", 00:10:20.696 "aliases": [ 00:10:20.696 "8b27c2cd-9cc2-43be-b617-fa7383930c81" 00:10:20.696 ], 00:10:20.696 "product_name": "Malloc disk", 00:10:20.696 "block_size": 512, 00:10:20.696 "num_blocks": 65536, 00:10:20.696 "uuid": "8b27c2cd-9cc2-43be-b617-fa7383930c81", 00:10:20.696 "assigned_rate_limits": { 00:10:20.696 "rw_ios_per_sec": 0, 00:10:20.696 "rw_mbytes_per_sec": 0, 00:10:20.696 "r_mbytes_per_sec": 0, 00:10:20.696 "w_mbytes_per_sec": 0 00:10:20.696 }, 00:10:20.696 "claimed": true, 00:10:20.696 "claim_type": "exclusive_write", 00:10:20.696 "zoned": false, 00:10:20.696 "supported_io_types": { 00:10:20.696 "read": 
true, 00:10:20.696 "write": true, 00:10:20.696 "unmap": true, 00:10:20.696 "flush": true, 00:10:20.696 "reset": true, 00:10:20.696 "nvme_admin": false, 00:10:20.696 "nvme_io": false, 00:10:20.696 "nvme_io_md": false, 00:10:20.696 "write_zeroes": true, 00:10:20.696 "zcopy": true, 00:10:20.696 "get_zone_info": false, 00:10:20.696 "zone_management": false, 00:10:20.696 "zone_append": false, 00:10:20.696 "compare": false, 00:10:20.696 "compare_and_write": false, 00:10:20.697 "abort": true, 00:10:20.697 "seek_hole": false, 00:10:20.697 "seek_data": false, 00:10:20.697 "copy": true, 00:10:20.697 "nvme_iov_md": false 00:10:20.697 }, 00:10:20.697 "memory_domains": [ 00:10:20.697 { 00:10:20.697 "dma_device_id": "system", 00:10:20.697 "dma_device_type": 1 00:10:20.697 }, 00:10:20.697 { 00:10:20.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.697 "dma_device_type": 2 00:10:20.697 } 00:10:20.697 ], 00:10:20.697 "driver_specific": {} 00:10:20.697 } 00:10:20.697 ] 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.697 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.954 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:20.954 "name": "Existed_Raid", 00:10:20.954 "uuid": "92fa3162-2cd7-4507-90ce-357eb6d0eed9", 00:10:20.954 "strip_size_kb": 64, 00:10:20.954 "state": "online", 00:10:20.954 "raid_level": "concat", 00:10:20.954 "superblock": true, 00:10:20.954 "num_base_bdevs": 2, 00:10:20.954 "num_base_bdevs_discovered": 2, 00:10:20.954 "num_base_bdevs_operational": 2, 00:10:20.954 "base_bdevs_list": [ 00:10:20.954 { 00:10:20.954 "name": "BaseBdev1", 00:10:20.954 "uuid": "a70d30bd-adf4-4e1f-9f46-16513348161d", 00:10:20.954 "is_configured": true, 00:10:20.954 "data_offset": 2048, 00:10:20.954 "data_size": 63488 00:10:20.954 }, 
00:10:20.954 { 00:10:20.954 "name": "BaseBdev2", 00:10:20.954 "uuid": "8b27c2cd-9cc2-43be-b617-fa7383930c81", 00:10:20.954 "is_configured": true, 00:10:20.954 "data_offset": 2048, 00:10:20.954 "data_size": 63488 00:10:20.954 } 00:10:20.954 ] 00:10:20.954 }' 00:10:20.954 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:20.954 10:17:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.212 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.212 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:21.212 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:21.212 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:21.212 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:21.212 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:21.212 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:21.212 10:17:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:21.470 [2024-07-15 10:17:46.088341] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.470 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:21.470 "name": "Existed_Raid", 00:10:21.470 "aliases": [ 00:10:21.470 "92fa3162-2cd7-4507-90ce-357eb6d0eed9" 00:10:21.470 ], 00:10:21.470 "product_name": "Raid Volume", 00:10:21.470 "block_size": 512, 00:10:21.470 "num_blocks": 126976, 00:10:21.470 "uuid": "92fa3162-2cd7-4507-90ce-357eb6d0eed9", 00:10:21.470 "assigned_rate_limits": { 00:10:21.470 "rw_ios_per_sec": 0, 00:10:21.470 "rw_mbytes_per_sec": 0, 00:10:21.470 "r_mbytes_per_sec": 0, 00:10:21.470 "w_mbytes_per_sec": 0 00:10:21.470 }, 00:10:21.470 "claimed": false, 00:10:21.470 "zoned": false, 00:10:21.470 "supported_io_types": { 00:10:21.470 "read": true, 00:10:21.470 "write": true, 00:10:21.470 "unmap": true, 00:10:21.470 "flush": true, 00:10:21.470 "reset": true, 00:10:21.470 "nvme_admin": false, 00:10:21.470 "nvme_io": false, 00:10:21.470 "nvme_io_md": false, 00:10:21.470 "write_zeroes": true, 00:10:21.470 "zcopy": false, 00:10:21.470 "get_zone_info": false, 00:10:21.470 "zone_management": false, 00:10:21.470 "zone_append": false, 00:10:21.470 "compare": false, 00:10:21.470 "compare_and_write": false, 00:10:21.470 "abort": false, 00:10:21.470 "seek_hole": false, 00:10:21.470 "seek_data": false, 00:10:21.470 "copy": false, 00:10:21.470 "nvme_iov_md": false 00:10:21.470 }, 00:10:21.470 "memory_domains": [ 00:10:21.470 { 00:10:21.470 "dma_device_id": "system", 00:10:21.470 "dma_device_type": 1 00:10:21.470 }, 00:10:21.470 { 00:10:21.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.470 "dma_device_type": 2 00:10:21.470 }, 00:10:21.470 { 00:10:21.470 "dma_device_id": "system", 00:10:21.470 "dma_device_type": 1 00:10:21.470 }, 00:10:21.470 { 00:10:21.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.470 "dma_device_type": 2 00:10:21.470 } 00:10:21.470 ], 00:10:21.470 "driver_specific": { 00:10:21.470 "raid": { 00:10:21.470 "uuid": 
"92fa3162-2cd7-4507-90ce-357eb6d0eed9", 00:10:21.470 "strip_size_kb": 64, 00:10:21.470 "state": "online", 00:10:21.470 "raid_level": "concat", 00:10:21.470 "superblock": true, 00:10:21.470 "num_base_bdevs": 2, 00:10:21.470 "num_base_bdevs_discovered": 2, 00:10:21.470 "num_base_bdevs_operational": 2, 00:10:21.470 "base_bdevs_list": [ 00:10:21.470 { 00:10:21.470 "name": "BaseBdev1", 00:10:21.470 "uuid": "a70d30bd-adf4-4e1f-9f46-16513348161d", 00:10:21.470 "is_configured": true, 00:10:21.470 "data_offset": 2048, 00:10:21.470 "data_size": 63488 00:10:21.470 }, 00:10:21.470 { 00:10:21.470 "name": "BaseBdev2", 00:10:21.470 "uuid": "8b27c2cd-9cc2-43be-b617-fa7383930c81", 00:10:21.470 "is_configured": true, 00:10:21.470 "data_offset": 2048, 00:10:21.470 "data_size": 63488 00:10:21.470 } 00:10:21.470 ] 00:10:21.470 } 00:10:21.470 } 00:10:21.470 }' 00:10:21.470 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.470 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:21.470 BaseBdev2' 00:10:21.470 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:21.470 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:21.470 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:21.728 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:21.728 "name": "BaseBdev1", 00:10:21.728 "aliases": [ 00:10:21.728 "a70d30bd-adf4-4e1f-9f46-16513348161d" 00:10:21.728 ], 00:10:21.728 "product_name": "Malloc disk", 00:10:21.728 "block_size": 512, 00:10:21.728 "num_blocks": 65536, 00:10:21.728 "uuid": "a70d30bd-adf4-4e1f-9f46-16513348161d", 00:10:21.728 "assigned_rate_limits": { 00:10:21.728 "rw_ios_per_sec": 0, 00:10:21.728 "rw_mbytes_per_sec": 0, 00:10:21.728 "r_mbytes_per_sec": 0, 00:10:21.728 "w_mbytes_per_sec": 0 00:10:21.728 }, 00:10:21.728 "claimed": true, 00:10:21.728 "claim_type": "exclusive_write", 00:10:21.728 "zoned": false, 00:10:21.728 "supported_io_types": { 00:10:21.728 "read": true, 00:10:21.728 "write": true, 00:10:21.728 "unmap": true, 00:10:21.728 "flush": true, 00:10:21.728 "reset": true, 00:10:21.728 "nvme_admin": false, 00:10:21.728 "nvme_io": false, 00:10:21.728 "nvme_io_md": false, 00:10:21.728 "write_zeroes": true, 00:10:21.728 "zcopy": true, 00:10:21.728 "get_zone_info": false, 00:10:21.728 "zone_management": false, 00:10:21.728 "zone_append": false, 00:10:21.728 "compare": false, 00:10:21.728 "compare_and_write": false, 00:10:21.728 "abort": true, 00:10:21.728 "seek_hole": false, 00:10:21.728 "seek_data": false, 00:10:21.729 "copy": true, 00:10:21.729 "nvme_iov_md": false 00:10:21.729 }, 00:10:21.729 "memory_domains": [ 00:10:21.729 { 00:10:21.729 "dma_device_id": "system", 00:10:21.729 "dma_device_type": 1 00:10:21.729 }, 00:10:21.729 { 00:10:21.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.729 "dma_device_type": 2 00:10:21.729 } 00:10:21.729 ], 00:10:21.729 "driver_specific": {} 00:10:21.729 }' 00:10:21.729 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:21.729 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:21.729 10:17:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:21.729 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:21.729 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:21.729 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:21.729 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:21.729 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:21.986 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:21.986 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:21.986 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:21.986 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:21.986 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:21.986 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:21.986 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:22.244 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:22.244 "name": "BaseBdev2", 00:10:22.244 "aliases": [ 00:10:22.244 "8b27c2cd-9cc2-43be-b617-fa7383930c81" 00:10:22.244 ], 00:10:22.244 "product_name": "Malloc disk", 00:10:22.244 "block_size": 512, 00:10:22.244 "num_blocks": 65536, 00:10:22.244 "uuid": "8b27c2cd-9cc2-43be-b617-fa7383930c81", 00:10:22.244 "assigned_rate_limits": { 00:10:22.244 "rw_ios_per_sec": 0, 00:10:22.244 "rw_mbytes_per_sec": 0, 00:10:22.244 "r_mbytes_per_sec": 0, 00:10:22.244 "w_mbytes_per_sec": 0 00:10:22.244 }, 00:10:22.244 "claimed": true, 00:10:22.244 "claim_type": "exclusive_write", 00:10:22.244 "zoned": false, 00:10:22.244 "supported_io_types": { 00:10:22.244 "read": true, 00:10:22.244 "write": true, 00:10:22.244 "unmap": true, 00:10:22.244 "flush": true, 00:10:22.244 "reset": true, 00:10:22.244 "nvme_admin": false, 00:10:22.244 "nvme_io": false, 00:10:22.244 "nvme_io_md": false, 00:10:22.244 "write_zeroes": true, 00:10:22.244 "zcopy": true, 00:10:22.244 "get_zone_info": false, 00:10:22.244 "zone_management": false, 00:10:22.244 "zone_append": false, 00:10:22.244 "compare": false, 00:10:22.244 "compare_and_write": false, 00:10:22.244 "abort": true, 00:10:22.244 "seek_hole": false, 00:10:22.244 "seek_data": false, 00:10:22.244 "copy": true, 00:10:22.244 "nvme_iov_md": false 00:10:22.244 }, 00:10:22.244 "memory_domains": [ 00:10:22.244 { 00:10:22.244 "dma_device_id": "system", 00:10:22.244 "dma_device_type": 1 00:10:22.244 }, 00:10:22.244 { 00:10:22.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.244 "dma_device_type": 2 00:10:22.244 } 00:10:22.244 ], 00:10:22.244 "driver_specific": {} 00:10:22.244 }' 00:10:22.244 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.244 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.244 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:22.244 10:17:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.244 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.244 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:22.244 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.244 10:17:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.244 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.244 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:22.501 [2024-07-15 10:17:47.263215] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:22.501 [2024-07-15 10:17:47.263234] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.501 [2024-07-15 10:17:47.263260] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:22.501 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:22.502 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.502 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
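For readers following the trace, the verify_raid_bdev_state helper exercised here is just an RPC dump filtered with jq; a minimal standalone sketch of the same check, reusing the socket path, bdev name, and expected state from this run (the variable names and failure message are illustrative, not part of the log), would be:

# Re-create the state check performed by verify_raid_bdev_state above.
# Assumptions: the bdev_svc app from this test is still listening on the
# socket below, and "Existed_Raid"/"offline" match the values in this run.
sock=/var/tmp/spdk-raid.sock
raid_bdev_name=Existed_Raid
expected_state=offline
info=$(spdk/scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
       jq -r ".[] | select(.name == \"$raid_bdev_name\")")
state=$(echo "$info" | jq -r '.state')
# The test fails the run if the reported state differs from the expectation.
[ "$state" = "$expected_state" ] || echo "unexpected raid state: $state"

The bdev_raid.sh@126 lines above and below show exactly this query and jq filter being issued by the test script.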
00:10:22.758 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.758 "name": "Existed_Raid", 00:10:22.758 "uuid": "92fa3162-2cd7-4507-90ce-357eb6d0eed9", 00:10:22.758 "strip_size_kb": 64, 00:10:22.758 "state": "offline", 00:10:22.758 "raid_level": "concat", 00:10:22.758 "superblock": true, 00:10:22.758 "num_base_bdevs": 2, 00:10:22.758 "num_base_bdevs_discovered": 1, 00:10:22.758 "num_base_bdevs_operational": 1, 00:10:22.758 "base_bdevs_list": [ 00:10:22.758 { 00:10:22.758 "name": null, 00:10:22.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.758 "is_configured": false, 00:10:22.758 "data_offset": 2048, 00:10:22.758 "data_size": 63488 00:10:22.758 }, 00:10:22.758 { 00:10:22.758 "name": "BaseBdev2", 00:10:22.758 "uuid": "8b27c2cd-9cc2-43be-b617-fa7383930c81", 00:10:22.758 "is_configured": true, 00:10:22.758 "data_offset": 2048, 00:10:22.758 "data_size": 63488 00:10:22.758 } 00:10:22.758 ] 00:10:22.758 }' 00:10:22.758 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.758 10:17:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.322 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:23.322 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:23.322 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:23.322 10:17:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.322 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:23.322 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:23.322 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:23.579 [2024-07-15 10:17:48.258621] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:23.579 [2024-07-15 10:17:48.258655] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd8a600 name Existed_Raid, state offline 00:10:23.579 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:23.579 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:23.579 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.579 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1750934 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1750934 ']' 00:10:23.837 10:17:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1750934 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1750934 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1750934' 00:10:23.837 killing process with pid 1750934 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1750934 00:10:23.837 [2024-07-15 10:17:48.512019] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.837 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1750934 00:10:23.837 [2024-07-15 10:17:48.512804] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.096 10:17:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:24.096 00:10:24.096 real 0m7.992s 00:10:24.096 user 0m14.011s 00:10:24.096 sys 0m1.597s 00:10:24.096 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.096 10:17:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.096 ************************************ 00:10:24.096 END TEST raid_state_function_test_sb 00:10:24.096 ************************************ 00:10:24.096 10:17:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:24.096 10:17:48 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:24.096 10:17:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:24.096 10:17:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.096 10:17:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.096 ************************************ 00:10:24.096 START TEST raid_superblock_test 00:10:24.096 ************************************ 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:24.096 
10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1752509 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1752509 /var/tmp/spdk-raid.sock 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1752509 ']' 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:24.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.096 10:17:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.096 [2024-07-15 10:17:48.821268] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
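Before the raid_superblock_test RPCs can be issued, the harness brings up a dedicated SPDK app and points every rpc.py call at its socket, as the surrounding lines show; a condensed sketch of that bring-up (paths copied from this run, with a simple socket poll standing in for the waitforlisten helper and a plain kill standing in for killprocess) is:

# Sketch of the app start-up behind "waitforlisten 1752509 /var/tmp/spdk-raid.sock".
# Assumptions: run from the SPDK checkout used in this job; the polling loop is a
# simplified stand-in for the waitforlisten/killprocess helpers of autotest_common.sh.
sock=/var/tmp/spdk-raid.sock
./test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
raid_pid=$!
# Block until the RPC socket appears before creating malloc/passthru/raid bdevs.
until [ -S "$sock" ]; do sleep 0.1; done
./scripts/rpc.py -s "$sock" bdev_malloc_create 32 512 -b malloc1
./scripts/rpc.py -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
# ...the concat raid is then assembled on top of pt1/pt2 with bdev_raid_create.
kill "$raid_pid"

Running against a per-test socket like this keeps the raid RPCs isolated from any other SPDK instance that may be active on the build node.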
00:10:24.096 [2024-07-15 10:17:48.821315] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752509 ] 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:24.096 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:24.096 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:24.354 [2024-07-15 10:17:48.912082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.354 [2024-07-15 10:17:48.985851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.354 [2024-07-15 10:17:49.039910] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.354 [2024-07-15 10:17:49.039937] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.920 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:25.198 malloc1 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.198 [2024-07-15 10:17:49.927635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.198 [2024-07-15 10:17:49.927670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.198 [2024-07-15 10:17:49.927685] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9d02f0 00:10:25.198 [2024-07-15 10:17:49.927694] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.198 [2024-07-15 10:17:49.928781] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.198 [2024-07-15 10:17:49.928804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.198 pt1 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:25.198 10:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:25.454 malloc2 00:10:25.454 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.712 [2024-07-15 10:17:50.264284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.712 [2024-07-15 10:17:50.264316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.712 [2024-07-15 10:17:50.264330] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9d16d0 00:10:25.712 [2024-07-15 10:17:50.264338] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.712 [2024-07-15 10:17:50.265388] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.712 [2024-07-15 10:17:50.265410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.712 pt2 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:10:25.712 [2024-07-15 10:17:50.432733] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.712 [2024-07-15 10:17:50.433560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.712 [2024-07-15 10:17:50.433652] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb6a310 00:10:25.712 [2024-07-15 10:17:50.433660] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:25.712 [2024-07-15 10:17:50.433784] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb69ce0 00:10:25.712 [2024-07-15 10:17:50.433874] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb6a310 00:10:25.712 [2024-07-15 10:17:50.433880] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb6a310 00:10:25.712 [2024-07-15 10:17:50.433965] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.712 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.969 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:25.969 "name": "raid_bdev1", 00:10:25.969 "uuid": "c779d7c3-02a1-4f8c-bc65-4cd977eefead", 00:10:25.969 "strip_size_kb": 64, 00:10:25.969 "state": "online", 00:10:25.969 "raid_level": "concat", 00:10:25.969 "superblock": true, 00:10:25.969 "num_base_bdevs": 2, 00:10:25.969 "num_base_bdevs_discovered": 2, 00:10:25.969 "num_base_bdevs_operational": 2, 00:10:25.969 "base_bdevs_list": [ 00:10:25.969 { 00:10:25.969 "name": "pt1", 00:10:25.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.969 "is_configured": true, 00:10:25.969 "data_offset": 2048, 00:10:25.969 "data_size": 63488 00:10:25.969 }, 00:10:25.969 { 00:10:25.969 "name": "pt2", 00:10:25.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.969 "is_configured": true, 00:10:25.969 "data_offset": 2048, 00:10:25.969 "data_size": 63488 00:10:25.969 } 00:10:25.969 ] 00:10:25.969 }' 00:10:25.969 10:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:25.969 10:17:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:26.533 [2024-07-15 10:17:51.254981] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.533 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:26.533 "name": "raid_bdev1", 00:10:26.533 "aliases": [ 00:10:26.533 "c779d7c3-02a1-4f8c-bc65-4cd977eefead" 00:10:26.533 ], 00:10:26.533 "product_name": "Raid Volume", 00:10:26.533 "block_size": 512, 00:10:26.533 "num_blocks": 126976, 00:10:26.533 "uuid": "c779d7c3-02a1-4f8c-bc65-4cd977eefead", 00:10:26.533 "assigned_rate_limits": { 00:10:26.533 "rw_ios_per_sec": 0, 00:10:26.533 "rw_mbytes_per_sec": 0, 00:10:26.533 "r_mbytes_per_sec": 0, 00:10:26.533 "w_mbytes_per_sec": 0 00:10:26.533 }, 00:10:26.533 "claimed": false, 00:10:26.533 "zoned": false, 00:10:26.533 "supported_io_types": { 00:10:26.533 "read": true, 00:10:26.533 "write": true, 00:10:26.533 "unmap": true, 00:10:26.533 "flush": true, 00:10:26.533 "reset": true, 00:10:26.533 "nvme_admin": false, 00:10:26.533 "nvme_io": false, 00:10:26.533 "nvme_io_md": false, 00:10:26.533 "write_zeroes": true, 00:10:26.533 "zcopy": false, 00:10:26.533 "get_zone_info": false, 00:10:26.533 "zone_management": false, 00:10:26.533 "zone_append": false, 00:10:26.533 "compare": false, 00:10:26.533 "compare_and_write": false, 00:10:26.533 "abort": false, 00:10:26.533 "seek_hole": false, 00:10:26.533 "seek_data": false, 00:10:26.533 "copy": false, 00:10:26.533 "nvme_iov_md": false 00:10:26.533 }, 00:10:26.533 "memory_domains": [ 00:10:26.533 { 00:10:26.533 "dma_device_id": "system", 00:10:26.533 "dma_device_type": 1 00:10:26.533 }, 00:10:26.533 { 00:10:26.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.533 "dma_device_type": 2 00:10:26.533 }, 00:10:26.533 { 00:10:26.533 "dma_device_id": "system", 00:10:26.533 "dma_device_type": 1 00:10:26.533 }, 00:10:26.533 { 00:10:26.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.533 "dma_device_type": 2 00:10:26.533 } 00:10:26.533 ], 00:10:26.533 "driver_specific": { 00:10:26.533 "raid": { 00:10:26.533 "uuid": "c779d7c3-02a1-4f8c-bc65-4cd977eefead", 00:10:26.533 "strip_size_kb": 64, 00:10:26.533 "state": "online", 00:10:26.533 "raid_level": "concat", 00:10:26.533 "superblock": true, 00:10:26.533 "num_base_bdevs": 2, 00:10:26.533 "num_base_bdevs_discovered": 2, 00:10:26.533 "num_base_bdevs_operational": 2, 00:10:26.533 "base_bdevs_list": [ 00:10:26.534 { 00:10:26.534 "name": "pt1", 00:10:26.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.534 "is_configured": true, 00:10:26.534 "data_offset": 2048, 00:10:26.534 "data_size": 63488 00:10:26.534 }, 00:10:26.534 { 00:10:26.534 "name": "pt2", 00:10:26.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.534 "is_configured": true, 00:10:26.534 "data_offset": 2048, 00:10:26.534 "data_size": 63488 00:10:26.534 } 00:10:26.534 ] 00:10:26.534 } 00:10:26.534 } 00:10:26.534 }' 00:10:26.534 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.534 10:17:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:26.534 pt2' 00:10:26.534 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:26.791 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:26.791 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:26.791 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:26.791 "name": "pt1", 00:10:26.791 "aliases": [ 00:10:26.791 "00000000-0000-0000-0000-000000000001" 00:10:26.791 ], 00:10:26.791 "product_name": "passthru", 00:10:26.791 "block_size": 512, 00:10:26.791 "num_blocks": 65536, 00:10:26.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.791 "assigned_rate_limits": { 00:10:26.791 "rw_ios_per_sec": 0, 00:10:26.791 "rw_mbytes_per_sec": 0, 00:10:26.791 "r_mbytes_per_sec": 0, 00:10:26.791 "w_mbytes_per_sec": 0 00:10:26.791 }, 00:10:26.791 "claimed": true, 00:10:26.791 "claim_type": "exclusive_write", 00:10:26.791 "zoned": false, 00:10:26.791 "supported_io_types": { 00:10:26.791 "read": true, 00:10:26.791 "write": true, 00:10:26.791 "unmap": true, 00:10:26.791 "flush": true, 00:10:26.791 "reset": true, 00:10:26.791 "nvme_admin": false, 00:10:26.791 "nvme_io": false, 00:10:26.791 "nvme_io_md": false, 00:10:26.791 "write_zeroes": true, 00:10:26.791 "zcopy": true, 00:10:26.791 "get_zone_info": false, 00:10:26.791 "zone_management": false, 00:10:26.791 "zone_append": false, 00:10:26.791 "compare": false, 00:10:26.791 "compare_and_write": false, 00:10:26.791 "abort": true, 00:10:26.791 "seek_hole": false, 00:10:26.791 "seek_data": false, 00:10:26.791 "copy": true, 00:10:26.791 "nvme_iov_md": false 00:10:26.791 }, 00:10:26.791 "memory_domains": [ 00:10:26.791 { 00:10:26.791 "dma_device_id": "system", 00:10:26.791 "dma_device_type": 1 00:10:26.791 }, 00:10:26.791 { 00:10:26.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.791 "dma_device_type": 2 00:10:26.791 } 00:10:26.791 ], 00:10:26.791 "driver_specific": { 00:10:26.791 "passthru": { 00:10:26.791 "name": "pt1", 00:10:26.791 "base_bdev_name": "malloc1" 00:10:26.791 } 00:10:26.791 } 00:10:26.791 }' 00:10:26.791 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:26.791 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:27.048 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:27.305 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:27.305 "name": "pt2", 00:10:27.305 "aliases": [ 00:10:27.305 "00000000-0000-0000-0000-000000000002" 00:10:27.305 ], 00:10:27.305 "product_name": "passthru", 00:10:27.305 "block_size": 512, 00:10:27.305 "num_blocks": 65536, 00:10:27.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.305 "assigned_rate_limits": { 00:10:27.305 "rw_ios_per_sec": 0, 00:10:27.305 "rw_mbytes_per_sec": 0, 00:10:27.305 "r_mbytes_per_sec": 0, 00:10:27.305 "w_mbytes_per_sec": 0 00:10:27.305 }, 00:10:27.305 "claimed": true, 00:10:27.305 "claim_type": "exclusive_write", 00:10:27.305 "zoned": false, 00:10:27.305 "supported_io_types": { 00:10:27.305 "read": true, 00:10:27.305 "write": true, 00:10:27.305 "unmap": true, 00:10:27.305 "flush": true, 00:10:27.305 "reset": true, 00:10:27.305 "nvme_admin": false, 00:10:27.305 "nvme_io": false, 00:10:27.305 "nvme_io_md": false, 00:10:27.305 "write_zeroes": true, 00:10:27.305 "zcopy": true, 00:10:27.306 "get_zone_info": false, 00:10:27.306 "zone_management": false, 00:10:27.306 "zone_append": false, 00:10:27.306 "compare": false, 00:10:27.306 "compare_and_write": false, 00:10:27.306 "abort": true, 00:10:27.306 "seek_hole": false, 00:10:27.306 "seek_data": false, 00:10:27.306 "copy": true, 00:10:27.306 "nvme_iov_md": false 00:10:27.306 }, 00:10:27.306 "memory_domains": [ 00:10:27.306 { 00:10:27.306 "dma_device_id": "system", 00:10:27.306 "dma_device_type": 1 00:10:27.306 }, 00:10:27.306 { 00:10:27.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.306 "dma_device_type": 2 00:10:27.306 } 00:10:27.306 ], 00:10:27.306 "driver_specific": { 00:10:27.306 "passthru": { 00:10:27.306 "name": "pt2", 00:10:27.306 "base_bdev_name": "malloc2" 00:10:27.306 } 00:10:27.306 } 00:10:27.306 }' 00:10:27.306 10:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:27.306 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:27.306 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:27.306 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:10:27.563 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:27.820 [2024-07-15 10:17:52.458073] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.820 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c779d7c3-02a1-4f8c-bc65-4cd977eefead 00:10:27.820 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c779d7c3-02a1-4f8c-bc65-4cd977eefead ']' 00:10:27.820 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:28.077 [2024-07-15 10:17:52.614320] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.077 [2024-07-15 10:17:52.614331] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.077 [2024-07-15 10:17:52.614365] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.077 [2024-07-15 10:17:52.614395] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.078 [2024-07-15 10:17:52.614403] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb6a310 name raid_bdev1, state offline 00:10:28.078 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.078 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:28.078 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:28.078 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:28.078 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:28.078 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:28.335 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:28.335 10:17:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:28.335 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:28.335 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:28.593 10:17:53 
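Before the bdev_raid_create attempt that follows, the stack is torn down from the top: the raid bdev is deleted first, then each passthru base bdev, and a final bdev_get_bdevs confirms that no bdev with product_name "passthru" remains. Note that the malloc bdevs, and the superblocks written on them, survive this teardown, which is exactly what the next step relies on. The teardown RPCs as issued in this run:

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
    # sanity check: must print false once both passthru bdevs are gone
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs \
        | jq -r '[.[] | select(.product_name == "passthru")] | any'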
bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:28.593 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:10:28.850 [2024-07-15 10:17:53.448453] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:28.850 [2024-07-15 10:17:53.449365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:28.850 [2024-07-15 10:17:53.449404] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:28.850 [2024-07-15 10:17:53.449431] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:28.850 [2024-07-15 10:17:53.449443] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.850 [2024-07-15 10:17:53.449449] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb733f0 name raid_bdev1, state configuring 00:10:28.850 request: 00:10:28.850 { 00:10:28.850 "name": "raid_bdev1", 00:10:28.850 "raid_level": "concat", 00:10:28.850 "base_bdevs": [ 00:10:28.850 "malloc1", 00:10:28.850 "malloc2" 00:10:28.850 ], 00:10:28.850 "strip_size_kb": 64, 00:10:28.850 "superblock": false, 00:10:28.850 "method": "bdev_raid_create", 00:10:28.850 "req_id": 1 00:10:28.850 } 00:10:28.850 Got JSON-RPC error response 00:10:28.850 response: 00:10:28.850 { 00:10:28.850 "code": -17, 00:10:28.850 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:28.850 } 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- 
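The request/response pair above is the expected outcome: bdev_raid_create is refused with JSON-RPC error -17 (File exists) because malloc1 and malloc2 still carry the superblock of raid_bdev1, so a raid created directly on them with superblock: false would conflict with it. Reduced to the essential call, using the NOT helper from common/autotest_common.sh that inverts the exit status:

    # must fail: the malloc bdevs already hold raid_bdev1's superblock
    NOT scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1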
bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:28.850 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:29.107 [2024-07-15 10:17:53.789296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:29.107 [2024-07-15 10:17:53.789323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.107 [2024-07-15 10:17:53.789335] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb73d70 00:10:29.107 [2024-07-15 10:17:53.789344] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.107 [2024-07-15 10:17:53.790445] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.107 [2024-07-15 10:17:53.790468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:29.107 [2024-07-15 10:17:53.790510] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:29.107 [2024-07-15 10:17:53.790527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:29.107 pt1 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.107 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.364 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.364 "name": "raid_bdev1", 00:10:29.364 "uuid": "c779d7c3-02a1-4f8c-bc65-4cd977eefead", 00:10:29.364 "strip_size_kb": 64, 00:10:29.364 "state": "configuring", 00:10:29.364 "raid_level": "concat", 00:10:29.364 "superblock": true, 00:10:29.364 "num_base_bdevs": 2, 00:10:29.364 "num_base_bdevs_discovered": 1, 00:10:29.364 "num_base_bdevs_operational": 2, 00:10:29.364 "base_bdevs_list": [ 00:10:29.364 { 00:10:29.364 "name": "pt1", 00:10:29.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.364 "is_configured": true, 00:10:29.364 "data_offset": 2048, 00:10:29.364 "data_size": 63488 00:10:29.364 }, 00:10:29.364 { 00:10:29.364 
"name": null, 00:10:29.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.364 "is_configured": false, 00:10:29.364 "data_offset": 2048, 00:10:29.364 "data_size": 63488 00:10:29.364 } 00:10:29.364 ] 00:10:29.364 }' 00:10:29.364 10:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.364 10:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:29.930 [2024-07-15 10:17:54.603411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:29.930 [2024-07-15 10:17:54.603447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.930 [2024-07-15 10:17:54.603461] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb6abb0 00:10:29.930 [2024-07-15 10:17:54.603469] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.930 [2024-07-15 10:17:54.603702] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.930 [2024-07-15 10:17:54.603714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:29.930 [2024-07-15 10:17:54.603758] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:29.930 [2024-07-15 10:17:54.603771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:29.930 [2024-07-15 10:17:54.603835] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb69120 00:10:29.930 [2024-07-15 10:17:54.603842] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:29.930 [2024-07-15 10:17:54.603954] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9c9c20 00:10:29.930 [2024-07-15 10:17:54.604032] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb69120 00:10:29.930 [2024-07-15 10:17:54.604038] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb69120 00:10:29.930 [2024-07-15 10:17:54.604099] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.930 pt2 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.930 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.187 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.187 "name": "raid_bdev1", 00:10:30.187 "uuid": "c779d7c3-02a1-4f8c-bc65-4cd977eefead", 00:10:30.187 "strip_size_kb": 64, 00:10:30.187 "state": "online", 00:10:30.187 "raid_level": "concat", 00:10:30.187 "superblock": true, 00:10:30.187 "num_base_bdevs": 2, 00:10:30.187 "num_base_bdevs_discovered": 2, 00:10:30.187 "num_base_bdevs_operational": 2, 00:10:30.187 "base_bdevs_list": [ 00:10:30.187 { 00:10:30.187 "name": "pt1", 00:10:30.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.187 "is_configured": true, 00:10:30.187 "data_offset": 2048, 00:10:30.187 "data_size": 63488 00:10:30.187 }, 00:10:30.187 { 00:10:30.187 "name": "pt2", 00:10:30.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.187 "is_configured": true, 00:10:30.187 "data_offset": 2048, 00:10:30.187 "data_size": 63488 00:10:30.187 } 00:10:30.187 ] 00:10:30.187 }' 00:10:30.187 10:17:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.187 10:17:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:30.751 [2024-07-15 10:17:55.437879] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.751 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:30.751 "name": "raid_bdev1", 00:10:30.751 "aliases": [ 00:10:30.751 "c779d7c3-02a1-4f8c-bc65-4cd977eefead" 00:10:30.751 ], 00:10:30.751 "product_name": "Raid Volume", 00:10:30.751 "block_size": 512, 00:10:30.751 "num_blocks": 126976, 00:10:30.751 "uuid": "c779d7c3-02a1-4f8c-bc65-4cd977eefead", 00:10:30.751 "assigned_rate_limits": { 00:10:30.751 "rw_ios_per_sec": 0, 00:10:30.751 "rw_mbytes_per_sec": 0, 00:10:30.751 
"r_mbytes_per_sec": 0, 00:10:30.751 "w_mbytes_per_sec": 0 00:10:30.751 }, 00:10:30.751 "claimed": false, 00:10:30.751 "zoned": false, 00:10:30.751 "supported_io_types": { 00:10:30.751 "read": true, 00:10:30.751 "write": true, 00:10:30.751 "unmap": true, 00:10:30.751 "flush": true, 00:10:30.751 "reset": true, 00:10:30.751 "nvme_admin": false, 00:10:30.751 "nvme_io": false, 00:10:30.751 "nvme_io_md": false, 00:10:30.751 "write_zeroes": true, 00:10:30.751 "zcopy": false, 00:10:30.751 "get_zone_info": false, 00:10:30.751 "zone_management": false, 00:10:30.751 "zone_append": false, 00:10:30.751 "compare": false, 00:10:30.751 "compare_and_write": false, 00:10:30.751 "abort": false, 00:10:30.751 "seek_hole": false, 00:10:30.751 "seek_data": false, 00:10:30.751 "copy": false, 00:10:30.751 "nvme_iov_md": false 00:10:30.751 }, 00:10:30.751 "memory_domains": [ 00:10:30.751 { 00:10:30.751 "dma_device_id": "system", 00:10:30.751 "dma_device_type": 1 00:10:30.751 }, 00:10:30.751 { 00:10:30.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.751 "dma_device_type": 2 00:10:30.751 }, 00:10:30.751 { 00:10:30.751 "dma_device_id": "system", 00:10:30.751 "dma_device_type": 1 00:10:30.752 }, 00:10:30.752 { 00:10:30.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.752 "dma_device_type": 2 00:10:30.752 } 00:10:30.752 ], 00:10:30.752 "driver_specific": { 00:10:30.752 "raid": { 00:10:30.752 "uuid": "c779d7c3-02a1-4f8c-bc65-4cd977eefead", 00:10:30.752 "strip_size_kb": 64, 00:10:30.752 "state": "online", 00:10:30.752 "raid_level": "concat", 00:10:30.752 "superblock": true, 00:10:30.752 "num_base_bdevs": 2, 00:10:30.752 "num_base_bdevs_discovered": 2, 00:10:30.752 "num_base_bdevs_operational": 2, 00:10:30.752 "base_bdevs_list": [ 00:10:30.752 { 00:10:30.752 "name": "pt1", 00:10:30.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.752 "is_configured": true, 00:10:30.752 "data_offset": 2048, 00:10:30.752 "data_size": 63488 00:10:30.752 }, 00:10:30.752 { 00:10:30.752 "name": "pt2", 00:10:30.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.752 "is_configured": true, 00:10:30.752 "data_offset": 2048, 00:10:30.752 "data_size": 63488 00:10:30.752 } 00:10:30.752 ] 00:10:30.752 } 00:10:30.752 } 00:10:30.752 }' 00:10:30.752 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.752 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:30.752 pt2' 00:10:30.752 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:30.752 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:30.752 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.009 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.009 "name": "pt1", 00:10:31.009 "aliases": [ 00:10:31.009 "00000000-0000-0000-0000-000000000001" 00:10:31.009 ], 00:10:31.009 "product_name": "passthru", 00:10:31.009 "block_size": 512, 00:10:31.009 "num_blocks": 65536, 00:10:31.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.009 "assigned_rate_limits": { 00:10:31.009 "rw_ios_per_sec": 0, 00:10:31.009 "rw_mbytes_per_sec": 0, 00:10:31.009 "r_mbytes_per_sec": 0, 00:10:31.009 "w_mbytes_per_sec": 0 00:10:31.009 }, 00:10:31.009 "claimed": true, 
00:10:31.009 "claim_type": "exclusive_write", 00:10:31.009 "zoned": false, 00:10:31.009 "supported_io_types": { 00:10:31.009 "read": true, 00:10:31.009 "write": true, 00:10:31.009 "unmap": true, 00:10:31.009 "flush": true, 00:10:31.009 "reset": true, 00:10:31.009 "nvme_admin": false, 00:10:31.009 "nvme_io": false, 00:10:31.009 "nvme_io_md": false, 00:10:31.009 "write_zeroes": true, 00:10:31.009 "zcopy": true, 00:10:31.009 "get_zone_info": false, 00:10:31.009 "zone_management": false, 00:10:31.009 "zone_append": false, 00:10:31.009 "compare": false, 00:10:31.009 "compare_and_write": false, 00:10:31.009 "abort": true, 00:10:31.009 "seek_hole": false, 00:10:31.009 "seek_data": false, 00:10:31.009 "copy": true, 00:10:31.009 "nvme_iov_md": false 00:10:31.009 }, 00:10:31.009 "memory_domains": [ 00:10:31.009 { 00:10:31.009 "dma_device_id": "system", 00:10:31.009 "dma_device_type": 1 00:10:31.009 }, 00:10:31.009 { 00:10:31.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.009 "dma_device_type": 2 00:10:31.009 } 00:10:31.009 ], 00:10:31.009 "driver_specific": { 00:10:31.009 "passthru": { 00:10:31.009 "name": "pt1", 00:10:31.009 "base_bdev_name": "malloc1" 00:10:31.009 } 00:10:31.009 } 00:10:31.009 }' 00:10:31.009 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.009 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.009 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.009 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:31.266 10:17:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.524 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.524 "name": "pt2", 00:10:31.524 "aliases": [ 00:10:31.524 "00000000-0000-0000-0000-000000000002" 00:10:31.524 ], 00:10:31.524 "product_name": "passthru", 00:10:31.524 "block_size": 512, 00:10:31.524 "num_blocks": 65536, 00:10:31.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.524 "assigned_rate_limits": { 00:10:31.524 "rw_ios_per_sec": 0, 00:10:31.524 "rw_mbytes_per_sec": 0, 00:10:31.524 "r_mbytes_per_sec": 0, 00:10:31.524 "w_mbytes_per_sec": 0 00:10:31.524 }, 00:10:31.524 "claimed": true, 00:10:31.524 "claim_type": "exclusive_write", 00:10:31.524 "zoned": false, 00:10:31.524 "supported_io_types": { 00:10:31.524 "read": 
true, 00:10:31.524 "write": true, 00:10:31.524 "unmap": true, 00:10:31.524 "flush": true, 00:10:31.524 "reset": true, 00:10:31.524 "nvme_admin": false, 00:10:31.524 "nvme_io": false, 00:10:31.524 "nvme_io_md": false, 00:10:31.524 "write_zeroes": true, 00:10:31.524 "zcopy": true, 00:10:31.524 "get_zone_info": false, 00:10:31.524 "zone_management": false, 00:10:31.524 "zone_append": false, 00:10:31.524 "compare": false, 00:10:31.524 "compare_and_write": false, 00:10:31.524 "abort": true, 00:10:31.524 "seek_hole": false, 00:10:31.524 "seek_data": false, 00:10:31.524 "copy": true, 00:10:31.524 "nvme_iov_md": false 00:10:31.524 }, 00:10:31.524 "memory_domains": [ 00:10:31.524 { 00:10:31.524 "dma_device_id": "system", 00:10:31.524 "dma_device_type": 1 00:10:31.524 }, 00:10:31.524 { 00:10:31.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.524 "dma_device_type": 2 00:10:31.524 } 00:10:31.524 ], 00:10:31.524 "driver_specific": { 00:10:31.524 "passthru": { 00:10:31.524 "name": "pt2", 00:10:31.524 "base_bdev_name": "malloc2" 00:10:31.524 } 00:10:31.524 } 00:10:31.524 }' 00:10:31.524 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.524 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.524 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.524 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.524 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:31.780 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:32.039 [2024-07-15 10:17:56.641024] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c779d7c3-02a1-4f8c-bc65-4cd977eefead '!=' c779d7c3-02a1-4f8c-bc65-4cd977eefead ']' 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1752509 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1752509 ']' 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1752509 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- 
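The uuid comparison and the has_redundancy call above are the tail of raid_superblock_test: the reassembled volume has to keep the uuid recorded in its superblock, and because concat reports no redundancy the degraded-array checks are skipped and the SPDK app (pid 1752509) is simply killed. A paraphrased sketch of the gate traced at bdev_raid.sh@213-215; only the return-1 branch is visible in this trace, so the list of redundant levels is an assumption:

    has_redundancy() {
        case $1 in
            raid1 | raid5f) return 0 ;;   # assumed: levels that tolerate a missing base bdev
            *) return 1 ;;                # raid0/concat: no redundancy
        esac
    }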
common/autotest_common.sh@953 -- # uname 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752509 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752509' 00:10:32.039 killing process with pid 1752509 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1752509 00:10:32.039 [2024-07-15 10:17:56.712040] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.039 [2024-07-15 10:17:56.712078] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.039 [2024-07-15 10:17:56.712106] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.039 [2024-07-15 10:17:56.712113] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb69120 name raid_bdev1, state offline 00:10:32.039 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1752509 00:10:32.039 [2024-07-15 10:17:56.726954] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.297 10:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:32.297 00:10:32.297 real 0m8.133s 00:10:32.297 user 0m14.414s 00:10:32.297 sys 0m1.580s 00:10:32.297 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.297 10:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.297 ************************************ 00:10:32.297 END TEST raid_superblock_test 00:10:32.297 ************************************ 00:10:32.297 10:17:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:32.297 10:17:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:32.297 10:17:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:32.297 10:17:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.297 10:17:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.297 ************************************ 00:10:32.297 START TEST raid_read_error_test 00:10:32.297 ************************************ 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.mXzOWkSwQL 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1754065 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1754065 /var/tmp/spdk-raid.sock 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1754065 ']' 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:32.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.297 10:17:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.297 [2024-07-15 10:17:57.028880] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
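raid_read_error_test drives I/O through bdevperf rather than the plain RPC-only app used for the superblock test: bdevperf is started paused (-z) with a 60-second, 50/50 random read/write workload at 128k, queue depth 1, and bdev_raid debug logging, and the script waits for its RPC socket before building the bdev stack. A condensed sketch of the launch as it appears in this run (the redirection into the mktemp'd log is an assumption; the results are grepped out of that file later):

    bdevperf_log=/raidtest/tmp.mXzOWkSwQL   # mktemp -p /raidtest
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from common/autotest_common.sh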
00:10:32.297 [2024-07-15 10:17:57.028929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754065 ] 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:32.297 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.297 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:32.297 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.298 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:32.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.298 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:32.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.298 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:32.555 [2024-07-15 10:17:57.115161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.555 [2024-07-15 10:17:57.189347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.555 [2024-07-15 10:17:57.239940] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.555 [2024-07-15 10:17:57.239988] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.118 10:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.118 10:17:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:33.118 10:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:33.118 10:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:33.375 BaseBdev1_malloc 00:10:33.375 10:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:33.375 true 00:10:33.375 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:33.632 [2024-07-15 10:17:58.307802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:33.632 [2024-07-15 10:17:58.307836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.632 [2024-07-15 10:17:58.307851] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2848190 00:10:33.632 [2024-07-15 10:17:58.307859] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.632 [2024-07-15 10:17:58.309087] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.632 [2024-07-15 10:17:58.309111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:33.632 BaseBdev1 00:10:33.632 10:17:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:33.632 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:33.889 BaseBdev2_malloc 00:10:33.889 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:33.889 true 00:10:33.889 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.146 [2024-07-15 10:17:58.788517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.146 [2024-07-15 10:17:58.788550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.146 [2024-07-15 10:17:58.788570] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x284ce20 00:10:34.146 [2024-07-15 10:17:58.788578] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.146 [2024-07-15 10:17:58.789617] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.146 [2024-07-15 10:17:58.789639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.146 BaseBdev2 00:10:34.146 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:34.402 [2024-07-15 10:17:58.956975] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.402 [2024-07-15 10:17:58.957815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.402 [2024-07-15 10:17:58.957966] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x284ea50 00:10:34.402 [2024-07-15 10:17:58.957976] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:34.402 [2024-07-15 10:17:58.958104] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x284e2b0 00:10:34.402 [2024-07-15 10:17:58.958203] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x284ea50 00:10:34.402 [2024-07-15 10:17:58.958210] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x284ea50 00:10:34.402 [2024-07-15 10:17:58.958276] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.402 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:34.402 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:34.402 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:34.402 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:34.402 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:34.403 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:34.403 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # 
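Each base bdev of this raid is a three-layer stack: a 32 MiB, 512 B-block malloc bdev, wrapped by an error bdev (the EE_* device that will later receive the injected failures), wrapped by a passthru named BaseBdevN; the raid is then created on top with -s so a superblock is written. One leg plus the create call, using the names from this run:

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc      # exposes EE_BaseBdev1_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # BaseBdev2 is built the same way, then:
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s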
local raid_bdev_info 00:10:34.403 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:34.403 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.403 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.403 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.403 10:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.403 10:17:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.403 "name": "raid_bdev1", 00:10:34.403 "uuid": "2430f79f-4557-4955-92bf-a1784775ab27", 00:10:34.403 "strip_size_kb": 64, 00:10:34.403 "state": "online", 00:10:34.403 "raid_level": "concat", 00:10:34.403 "superblock": true, 00:10:34.403 "num_base_bdevs": 2, 00:10:34.403 "num_base_bdevs_discovered": 2, 00:10:34.403 "num_base_bdevs_operational": 2, 00:10:34.403 "base_bdevs_list": [ 00:10:34.403 { 00:10:34.403 "name": "BaseBdev1", 00:10:34.403 "uuid": "3ef0af21-d7c8-5445-ab93-3ef79d45a041", 00:10:34.403 "is_configured": true, 00:10:34.403 "data_offset": 2048, 00:10:34.403 "data_size": 63488 00:10:34.403 }, 00:10:34.403 { 00:10:34.403 "name": "BaseBdev2", 00:10:34.403 "uuid": "dcf5f160-f677-560f-9cf3-810da097750e", 00:10:34.403 "is_configured": true, 00:10:34.403 "data_offset": 2048, 00:10:34.403 "data_size": 63488 00:10:34.403 } 00:10:34.403 ] 00:10:34.403 }' 00:10:34.403 10:17:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.403 10:17:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.967 10:17:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:34.967 10:17:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:34.967 [2024-07-15 10:17:59.703100] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2849b50 00:10:35.899 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- 
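With the workload kicked off via bdevperf.py, the failure is injected into the error bdev underneath BaseBdev1, so every subsequent read on that leg fails while the raid itself stays online (concat never drops a base bdev, hence expected_num_base_bdevs=2 above). The calls driving this phase, as issued in this run; the trace ordering suggests perform_tests is backgrounded, sketched here with &:

    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure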
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.156 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.413 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:36.413 "name": "raid_bdev1", 00:10:36.413 "uuid": "2430f79f-4557-4955-92bf-a1784775ab27", 00:10:36.413 "strip_size_kb": 64, 00:10:36.413 "state": "online", 00:10:36.413 "raid_level": "concat", 00:10:36.413 "superblock": true, 00:10:36.413 "num_base_bdevs": 2, 00:10:36.413 "num_base_bdevs_discovered": 2, 00:10:36.413 "num_base_bdevs_operational": 2, 00:10:36.413 "base_bdevs_list": [ 00:10:36.413 { 00:10:36.413 "name": "BaseBdev1", 00:10:36.413 "uuid": "3ef0af21-d7c8-5445-ab93-3ef79d45a041", 00:10:36.413 "is_configured": true, 00:10:36.413 "data_offset": 2048, 00:10:36.413 "data_size": 63488 00:10:36.413 }, 00:10:36.413 { 00:10:36.413 "name": "BaseBdev2", 00:10:36.413 "uuid": "dcf5f160-f677-560f-9cf3-810da097750e", 00:10:36.413 "is_configured": true, 00:10:36.413 "data_offset": 2048, 00:10:36.413 "data_size": 63488 00:10:36.413 } 00:10:36.413 ] 00:10:36.413 }' 00:10:36.413 10:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:36.413 10:18:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:36.982 [2024-07-15 10:18:01.630819] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.982 [2024-07-15 10:18:01.630843] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.982 [2024-07-15 10:18:01.632944] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.982 [2024-07-15 10:18:01.632966] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.982 [2024-07-15 10:18:01.632984] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.982 [2024-07-15 10:18:01.632992] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x284ea50 name raid_bdev1, state offline 00:10:36.982 0 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1754065 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1754065 ']' 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1754065 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1754065 00:10:36.982 10:18:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1754065' 00:10:36.982 killing process with pid 1754065 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1754065 00:10:36.982 [2024-07-15 10:18:01.704970] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.982 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1754065 00:10:36.982 [2024-07-15 10:18:01.714407] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.240 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.mXzOWkSwQL 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:37.241 00:10:37.241 real 0m4.921s 00:10:37.241 user 0m7.412s 00:10:37.241 sys 0m0.845s 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:37.241 10:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.241 ************************************ 00:10:37.241 END TEST raid_read_error_test 00:10:37.241 ************************************ 00:10:37.241 10:18:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:37.241 10:18:01 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:37.241 10:18:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:37.241 10:18:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.241 10:18:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.241 ************************************ 00:10:37.241 START TEST raid_write_error_test 00:10:37.241 ************************************ 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:37.241 10:18:01 
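The pass/fail decision for raid_read_error_test just above comes straight from bdevperf's statistics: its log is grepped for the raid_bdev1 job line and column 6 (failed I/Os per second) is compared with 0.00. Since concat has no redundancy, the injected read errors are expected to surface to the application, so the nonzero 0.52 seen here is the passing outcome. A hedged sketch of the check, assuming bdevperf's usual column layout and a redundant branch mirroring the one traced here:

    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    if has_redundancy concat; then
        [[ $fail_per_s == "0.00" ]]   # redundant levels must mask the failures
    else
        [[ $fail_per_s != "0.00" ]]   # concat: failures must be visible (0.52 in this run)
    fi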
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.OgIlChgcII 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1755090 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1755090 /var/tmp/spdk-raid.sock 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1755090 ']' 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:37.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.241 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:37.499 [2024-07-15 10:18:02.041166] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
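For readers following the trace, the raid_write_error_test prologue above amounts to launching bdevperf against a private RPC socket and waiting for it to come up before any bdevs are created. A minimal shell sketch of that step, using the same arguments visible in the log; the SPDK path is the tree checked out by this job, the log file name is whatever mktemp returns (tmp.OgIlChgcII in this run), and waitforlisten is the helper sourced from test/common/autotest_common.sh, so this is a sketch rather than the literal script text:

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk-raid.sock
    BDEVPERF_LOG=$(mktemp -p /raidtest)   # e.g. /raidtest/tmp.OgIlChgcII in this run

    # 60s mixed random read/write, 128k I/O size, queue depth 1; -z waits for an
    # RPC "perform_tests" call, -L bdev_raid enables the debug log seen in this trace
    $SPDK/build/examples/bdevperf -r $RPC_SOCK -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 128k -q 1 -z -f -L bdev_raid > $BDEVPERF_LOG &
    raid_pid=$!

    # block until the application listens on the UNIX domain socket
    waitforlisten $raid_pid $RPC_SOCK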
00:10:37.499 [2024-07-15 10:18:02.041214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755090 ] 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:37.499 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:37.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:37.499 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:37.499 [2024-07-15 10:18:02.132895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.499 [2024-07-15 10:18:02.208208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.499 [2024-07-15 10:18:02.261852] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.499 [2024-07-15 10:18:02.261876] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.074 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.074 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:38.074 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:38.074 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:38.370 BaseBdev1_malloc 00:10:38.370 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:38.370 true 00:10:38.370 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:38.637 [2024-07-15 10:18:03.274513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:38.637 [2024-07-15 10:18:03.274549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.637 [2024-07-15 10:18:03.274565] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2163190 00:10:38.637 [2024-07-15 10:18:03.274574] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.637 [2024-07-15 10:18:03.275753] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.637 [2024-07-15 10:18:03.275776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:38.637 BaseBdev1 00:10:38.637 10:18:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:38.637 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:38.894 BaseBdev2_malloc 00:10:38.894 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:38.894 true 00:10:38.894 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:39.152 [2024-07-15 10:18:03.755372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:39.152 [2024-07-15 10:18:03.755407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.152 [2024-07-15 10:18:03.755421] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2167e20 00:10:39.152 [2024-07-15 10:18:03.755430] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.152 [2024-07-15 10:18:03.756509] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.152 [2024-07-15 10:18:03.756533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:39.152 BaseBdev2 00:10:39.152 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:10:39.152 [2024-07-15 10:18:03.927847] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.152 [2024-07-15 10:18:03.928807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.152 [2024-07-15 10:18:03.928946] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2169a50 00:10:39.152 [2024-07-15 10:18:03.928956] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:39.152 [2024-07-15 10:18:03.929089] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21692b0 00:10:39.152 [2024-07-15 10:18:03.929191] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2169a50 00:10:39.152 [2024-07-15 10:18:03.929198] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2169a50 00:10:39.152 [2024-07-15 10:18:03.929268] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.410 10:18:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.410 10:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.410 "name": "raid_bdev1", 00:10:39.410 "uuid": "6fd17a1b-a004-4f21-b519-126eb6349ae2", 00:10:39.410 "strip_size_kb": 64, 00:10:39.410 "state": "online", 00:10:39.410 "raid_level": "concat", 00:10:39.410 "superblock": true, 00:10:39.410 "num_base_bdevs": 2, 00:10:39.410 "num_base_bdevs_discovered": 2, 00:10:39.410 "num_base_bdevs_operational": 2, 00:10:39.410 "base_bdevs_list": [ 00:10:39.410 { 00:10:39.410 "name": "BaseBdev1", 00:10:39.410 "uuid": "353f8c91-4fc7-5348-a096-a11c72cc1a53", 00:10:39.410 "is_configured": true, 00:10:39.410 "data_offset": 2048, 00:10:39.410 "data_size": 63488 00:10:39.410 }, 00:10:39.410 { 00:10:39.410 "name": "BaseBdev2", 00:10:39.410 "uuid": "7f57cb68-d9ec-500d-b0b4-1ac27a9c6d19", 00:10:39.410 "is_configured": true, 00:10:39.410 "data_offset": 2048, 00:10:39.410 "data_size": 63488 00:10:39.410 } 00:10:39.410 ] 00:10:39.410 }' 00:10:39.410 10:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.410 10:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.976 10:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:39.976 10:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:39.976 [2024-07-15 10:18:04.650015] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2164b50 00:10:40.911 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
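The write-error flow traced above wraps each malloc bdev in an error bdev and a passthru bdev, assembles the two passthru bdevs into a concat raid, starts the bdevperf job in the background, and then injects write failures underneath BaseBdev1. A condensed sketch of those RPCs as they appear in the log (the RPC variable and the loop are shorthand introduced here; the test itself also interleaves state checks between steps):

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for bdev in BaseBdev1 BaseBdev2; do
        $RPC bdev_malloc_create 32 512 -b ${bdev}_malloc          # 32 MiB, 512-byte blocks
        $RPC bdev_error_create ${bdev}_malloc                     # creates EE_${bdev}_malloc
        $RPC bdev_passthru_create -b EE_${bdev}_malloc -p $bdev   # name exposed to the raid
    done

    # concat raid, 64k strip size, with superblock (-s)
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

    # kick off the I/O job defined on the bdevperf command line, then fail writes
    # issued to the error bdev beneath BaseBdev1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure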
00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.169 "name": "raid_bdev1", 00:10:41.169 "uuid": "6fd17a1b-a004-4f21-b519-126eb6349ae2", 00:10:41.169 "strip_size_kb": 64, 00:10:41.169 "state": "online", 00:10:41.169 "raid_level": "concat", 00:10:41.169 "superblock": true, 00:10:41.169 "num_base_bdevs": 2, 00:10:41.169 "num_base_bdevs_discovered": 2, 00:10:41.169 "num_base_bdevs_operational": 2, 00:10:41.169 "base_bdevs_list": [ 00:10:41.169 { 00:10:41.169 "name": "BaseBdev1", 00:10:41.169 "uuid": "353f8c91-4fc7-5348-a096-a11c72cc1a53", 00:10:41.169 "is_configured": true, 00:10:41.169 "data_offset": 2048, 00:10:41.169 "data_size": 63488 00:10:41.169 }, 00:10:41.169 { 00:10:41.169 "name": "BaseBdev2", 00:10:41.169 "uuid": "7f57cb68-d9ec-500d-b0b4-1ac27a9c6d19", 00:10:41.169 "is_configured": true, 00:10:41.169 "data_offset": 2048, 00:10:41.169 "data_size": 63488 00:10:41.169 } 00:10:41.169 ] 00:10:41.169 }' 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.169 10:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.735 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:41.992 [2024-07-15 10:18:06.589774] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.992 [2024-07-15 10:18:06.589802] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.992 [2024-07-15 10:18:06.591938] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.992 [2024-07-15 10:18:06.591962] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.992 [2024-07-15 10:18:06.591985] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.993 [2024-07-15 10:18:06.591992] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2169a50 name raid_bdev1, state offline 00:10:41.993 0 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1755090 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1755090 ']' 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1755090 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1755090 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1755090' 00:10:41.993 killing process with pid 1755090 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1755090 00:10:41.993 [2024-07-15 10:18:06.659769] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.993 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1755090 00:10:41.993 [2024-07-15 10:18:06.669222] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.OgIlChgcII 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:10:42.251 00:10:42.251 real 0m4.881s 00:10:42.251 user 0m7.328s 00:10:42.251 sys 0m0.823s 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.251 10:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.251 ************************************ 00:10:42.251 END TEST raid_write_error_test 00:10:42.251 ************************************ 00:10:42.251 10:18:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:42.251 10:18:06 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:42.251 10:18:06 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:42.251 10:18:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:42.251 10:18:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.251 10:18:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.251 ************************************ 00:10:42.251 START TEST raid_state_function_test 00:10:42.251 ************************************ 00:10:42.251 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:10:42.251 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:42.251 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:42.252 10:18:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1756551 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1756551' 00:10:42.252 Process raid pid: 1756551 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1756551 /var/tmp/spdk-raid.sock 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1756551 ']' 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:42.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.252 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.252 [2024-07-15 10:18:07.008121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
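raid_state_function_test drives the raid state machine through a bare bdev_svc application rather than bdevperf. The startup step logged above is essentially the following (again a sketch: waitforlisten comes from test/common/autotest_common.sh, and the pid echoed in this run happened to be 1756551):

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk-raid.sock

    # minimal SPDK app that only hosts bdevs; -i 0 sets the shm id,
    # -L bdev_raid enables the raid debug log flag seen throughout this trace
    $SPDK/test/app/bdev_svc/bdev_svc -r $RPC_SOCK -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"

    waitforlisten $raid_pid $RPC_SOCK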
00:10:42.252 [2024-07-15 10:18:07.008169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:42.511 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:42.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.511 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:42.511 [2024-07-15 10:18:07.101176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.511 [2024-07-15 10:18:07.170886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.511 [2024-07-15 10:18:07.223762] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.511 [2024-07-15 10:18:07.223799] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.075 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.075 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:43.075 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:43.333 [2024-07-15 10:18:07.950321] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.333 [2024-07-15 10:18:07.950355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.333 [2024-07-15 10:18:07.950363] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.333 [2024-07-15 10:18:07.950371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.333 10:18:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.333 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.589 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:43.589 "name": "Existed_Raid", 00:10:43.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.589 "strip_size_kb": 0, 00:10:43.589 "state": "configuring", 00:10:43.589 "raid_level": "raid1", 00:10:43.589 "superblock": false, 00:10:43.589 "num_base_bdevs": 2, 00:10:43.589 "num_base_bdevs_discovered": 0, 00:10:43.589 "num_base_bdevs_operational": 2, 00:10:43.589 "base_bdevs_list": [ 00:10:43.589 { 00:10:43.589 "name": "BaseBdev1", 00:10:43.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.589 "is_configured": false, 00:10:43.589 "data_offset": 0, 00:10:43.589 "data_size": 0 00:10:43.589 }, 00:10:43.589 { 00:10:43.589 "name": "BaseBdev2", 00:10:43.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.589 "is_configured": false, 00:10:43.589 "data_offset": 0, 00:10:43.590 "data_size": 0 00:10:43.590 } 00:10:43.590 ] 00:10:43.590 }' 00:10:43.590 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:43.590 10:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.846 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:44.104 [2024-07-15 10:18:08.780357] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.104 [2024-07-15 10:18:08.780380] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1294f20 name Existed_Raid, state configuring 00:10:44.104 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:44.362 [2024-07-15 10:18:08.948801] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.362 [2024-07-15 10:18:08.948823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.362 [2024-07-15 10:18:08.948829] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.362 [2024-07-15 10:18:08.948837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.362 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:44.362 [2024-07-15 10:18:09.125791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.362 BaseBdev1 00:10:44.362 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:44.362 10:18:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:44.362 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:44.362 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:44.362 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:44.362 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:44.362 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:44.618 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:44.876 [ 00:10:44.876 { 00:10:44.876 "name": "BaseBdev1", 00:10:44.876 "aliases": [ 00:10:44.876 "5ff38dac-4943-49bc-afdf-9617f3c6b209" 00:10:44.876 ], 00:10:44.876 "product_name": "Malloc disk", 00:10:44.876 "block_size": 512, 00:10:44.876 "num_blocks": 65536, 00:10:44.876 "uuid": "5ff38dac-4943-49bc-afdf-9617f3c6b209", 00:10:44.876 "assigned_rate_limits": { 00:10:44.876 "rw_ios_per_sec": 0, 00:10:44.876 "rw_mbytes_per_sec": 0, 00:10:44.876 "r_mbytes_per_sec": 0, 00:10:44.876 "w_mbytes_per_sec": 0 00:10:44.876 }, 00:10:44.876 "claimed": true, 00:10:44.876 "claim_type": "exclusive_write", 00:10:44.876 "zoned": false, 00:10:44.876 "supported_io_types": { 00:10:44.876 "read": true, 00:10:44.876 "write": true, 00:10:44.876 "unmap": true, 00:10:44.876 "flush": true, 00:10:44.876 "reset": true, 00:10:44.876 "nvme_admin": false, 00:10:44.876 "nvme_io": false, 00:10:44.876 "nvme_io_md": false, 00:10:44.876 "write_zeroes": true, 00:10:44.876 "zcopy": true, 00:10:44.876 "get_zone_info": false, 00:10:44.876 "zone_management": false, 00:10:44.876 "zone_append": false, 00:10:44.876 "compare": false, 00:10:44.876 "compare_and_write": false, 00:10:44.876 "abort": true, 00:10:44.876 "seek_hole": false, 00:10:44.876 "seek_data": false, 00:10:44.876 "copy": true, 00:10:44.876 "nvme_iov_md": false 00:10:44.876 }, 00:10:44.876 "memory_domains": [ 00:10:44.876 { 00:10:44.876 "dma_device_id": "system", 00:10:44.876 "dma_device_type": 1 00:10:44.876 }, 00:10:44.876 { 00:10:44.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.876 "dma_device_type": 2 00:10:44.876 } 00:10:44.876 ], 00:10:44.876 "driver_specific": {} 00:10:44.876 } 00:10:44.876 ] 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:44.876 "name": "Existed_Raid", 00:10:44.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.876 "strip_size_kb": 0, 00:10:44.876 "state": "configuring", 00:10:44.876 "raid_level": "raid1", 00:10:44.876 "superblock": false, 00:10:44.876 "num_base_bdevs": 2, 00:10:44.876 "num_base_bdevs_discovered": 1, 00:10:44.876 "num_base_bdevs_operational": 2, 00:10:44.876 "base_bdevs_list": [ 00:10:44.876 { 00:10:44.876 "name": "BaseBdev1", 00:10:44.876 "uuid": "5ff38dac-4943-49bc-afdf-9617f3c6b209", 00:10:44.876 "is_configured": true, 00:10:44.876 "data_offset": 0, 00:10:44.876 "data_size": 65536 00:10:44.876 }, 00:10:44.876 { 00:10:44.876 "name": "BaseBdev2", 00:10:44.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.876 "is_configured": false, 00:10:44.876 "data_offset": 0, 00:10:44.876 "data_size": 0 00:10:44.876 } 00:10:44.876 ] 00:10:44.876 }' 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:44.876 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.438 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:45.696 [2024-07-15 10:18:10.296815] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.696 [2024-07-15 10:18:10.296849] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1294810 name Existed_Raid, state configuring 00:10:45.696 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:45.696 [2024-07-15 10:18:10.469278] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.696 [2024-07-15 10:18:10.470336] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.696 [2024-07-15 10:18:10.470363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.953 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:45.954 "name": "Existed_Raid", 00:10:45.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.954 "strip_size_kb": 0, 00:10:45.954 "state": "configuring", 00:10:45.954 "raid_level": "raid1", 00:10:45.954 "superblock": false, 00:10:45.954 "num_base_bdevs": 2, 00:10:45.954 "num_base_bdevs_discovered": 1, 00:10:45.954 "num_base_bdevs_operational": 2, 00:10:45.954 "base_bdevs_list": [ 00:10:45.954 { 00:10:45.954 "name": "BaseBdev1", 00:10:45.954 "uuid": "5ff38dac-4943-49bc-afdf-9617f3c6b209", 00:10:45.954 "is_configured": true, 00:10:45.954 "data_offset": 0, 00:10:45.954 "data_size": 65536 00:10:45.954 }, 00:10:45.954 { 00:10:45.954 "name": "BaseBdev2", 00:10:45.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.954 "is_configured": false, 00:10:45.954 "data_offset": 0, 00:10:45.954 "data_size": 0 00:10:45.954 } 00:10:45.954 ] 00:10:45.954 }' 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:45.954 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.518 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.519 [2024-07-15 10:18:11.294089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.519 [2024-07-15 10:18:11.294117] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1295600 00:10:46.519 [2024-07-15 10:18:11.294124] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:46.519 [2024-07-15 10:18:11.294261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x128bd80 00:10:46.519 [2024-07-15 10:18:11.294350] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1295600 00:10:46.519 [2024-07-15 10:18:11.294357] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1295600 00:10:46.519 [2024-07-15 10:18:11.294476] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.519 BaseBdev2 00:10:46.776 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 
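The sequence above is the heart of the state test: a raid1 volume is declared while its base bdevs do not exist yet, which leaves Existed_Raid in the "configuring" state, and each bdev_malloc_create that supplies a missing base moves it toward "online" (reached once num_base_bdevs_discovered hits 2). In RPC terms the trace roughly corresponds to the sketch below; note the real test also deletes and re-creates Existed_Raid between checks, which the sketch omits for brevity:

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # both base bdevs absent -> Existed_Raid is created in state "configuring"
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # supply the first base bdev; still "configuring", 1 of 2 bases discovered
    $RPC bdev_malloc_create 32 512 -b BaseBdev1

    # supply the second base bdev; the raid configures itself and goes "online"
    $RPC bdev_malloc_create 32 512 -b BaseBdev2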
00:10:46.776 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:46.776 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:46.776 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:46.776 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:46.776 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:46.776 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:46.776 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.033 [ 00:10:47.033 { 00:10:47.033 "name": "BaseBdev2", 00:10:47.033 "aliases": [ 00:10:47.033 "baaa9437-584d-45c3-a821-010a9636192e" 00:10:47.033 ], 00:10:47.033 "product_name": "Malloc disk", 00:10:47.033 "block_size": 512, 00:10:47.033 "num_blocks": 65536, 00:10:47.033 "uuid": "baaa9437-584d-45c3-a821-010a9636192e", 00:10:47.033 "assigned_rate_limits": { 00:10:47.033 "rw_ios_per_sec": 0, 00:10:47.033 "rw_mbytes_per_sec": 0, 00:10:47.033 "r_mbytes_per_sec": 0, 00:10:47.033 "w_mbytes_per_sec": 0 00:10:47.033 }, 00:10:47.033 "claimed": true, 00:10:47.033 "claim_type": "exclusive_write", 00:10:47.033 "zoned": false, 00:10:47.033 "supported_io_types": { 00:10:47.033 "read": true, 00:10:47.033 "write": true, 00:10:47.033 "unmap": true, 00:10:47.033 "flush": true, 00:10:47.033 "reset": true, 00:10:47.033 "nvme_admin": false, 00:10:47.033 "nvme_io": false, 00:10:47.033 "nvme_io_md": false, 00:10:47.033 "write_zeroes": true, 00:10:47.033 "zcopy": true, 00:10:47.033 "get_zone_info": false, 00:10:47.033 "zone_management": false, 00:10:47.033 "zone_append": false, 00:10:47.033 "compare": false, 00:10:47.033 "compare_and_write": false, 00:10:47.033 "abort": true, 00:10:47.033 "seek_hole": false, 00:10:47.033 "seek_data": false, 00:10:47.033 "copy": true, 00:10:47.033 "nvme_iov_md": false 00:10:47.033 }, 00:10:47.033 "memory_domains": [ 00:10:47.033 { 00:10:47.033 "dma_device_id": "system", 00:10:47.033 "dma_device_type": 1 00:10:47.033 }, 00:10:47.033 { 00:10:47.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.033 "dma_device_type": 2 00:10:47.033 } 00:10:47.033 ], 00:10:47.033 "driver_specific": {} 00:10:47.033 } 00:10:47.033 ] 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
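Each verify_raid_bdev_state call in this trace (for raid_bdev1 earlier and for Existed_Raid here) boils down to pulling the raid descriptor over RPC and comparing a few JSON fields against the expected values. A sketch of that check, assuming jq is available on the test host and reusing the RPC shorthand from above:

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # fields compared against the expected online/raid1/2-bdev configuration
    echo "$info" | jq -r .state                        # expect "online"
    echo "$info" | jq -r .raid_level                   # expect "raid1"
    echo "$info" | jq -r .num_base_bdevs_operational   # expect 2
    echo "$info" | jq -r .num_base_bdevs_discovered    # expect 2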
00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.033 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.291 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:47.291 "name": "Existed_Raid", 00:10:47.291 "uuid": "6d5ad69f-78e3-4462-a563-e9d934b7c2f6", 00:10:47.291 "strip_size_kb": 0, 00:10:47.291 "state": "online", 00:10:47.291 "raid_level": "raid1", 00:10:47.291 "superblock": false, 00:10:47.291 "num_base_bdevs": 2, 00:10:47.291 "num_base_bdevs_discovered": 2, 00:10:47.291 "num_base_bdevs_operational": 2, 00:10:47.291 "base_bdevs_list": [ 00:10:47.291 { 00:10:47.291 "name": "BaseBdev1", 00:10:47.291 "uuid": "5ff38dac-4943-49bc-afdf-9617f3c6b209", 00:10:47.291 "is_configured": true, 00:10:47.291 "data_offset": 0, 00:10:47.291 "data_size": 65536 00:10:47.291 }, 00:10:47.291 { 00:10:47.291 "name": "BaseBdev2", 00:10:47.291 "uuid": "baaa9437-584d-45c3-a821-010a9636192e", 00:10:47.291 "is_configured": true, 00:10:47.291 "data_offset": 0, 00:10:47.291 "data_size": 65536 00:10:47.291 } 00:10:47.291 ] 00:10:47.291 }' 00:10:47.291 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:47.291 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.548 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.548 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:47.548 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:47.548 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:47.548 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:47.548 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:47.548 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:47.549 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:47.805 [2024-07-15 10:18:12.473283] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.805 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:47.805 "name": "Existed_Raid", 00:10:47.805 "aliases": [ 00:10:47.805 "6d5ad69f-78e3-4462-a563-e9d934b7c2f6" 00:10:47.805 ], 00:10:47.805 "product_name": "Raid Volume", 00:10:47.805 "block_size": 512, 00:10:47.805 "num_blocks": 65536, 00:10:47.805 "uuid": 
"6d5ad69f-78e3-4462-a563-e9d934b7c2f6", 00:10:47.805 "assigned_rate_limits": { 00:10:47.805 "rw_ios_per_sec": 0, 00:10:47.805 "rw_mbytes_per_sec": 0, 00:10:47.805 "r_mbytes_per_sec": 0, 00:10:47.805 "w_mbytes_per_sec": 0 00:10:47.805 }, 00:10:47.805 "claimed": false, 00:10:47.805 "zoned": false, 00:10:47.805 "supported_io_types": { 00:10:47.805 "read": true, 00:10:47.805 "write": true, 00:10:47.805 "unmap": false, 00:10:47.805 "flush": false, 00:10:47.805 "reset": true, 00:10:47.805 "nvme_admin": false, 00:10:47.805 "nvme_io": false, 00:10:47.805 "nvme_io_md": false, 00:10:47.805 "write_zeroes": true, 00:10:47.805 "zcopy": false, 00:10:47.805 "get_zone_info": false, 00:10:47.805 "zone_management": false, 00:10:47.805 "zone_append": false, 00:10:47.805 "compare": false, 00:10:47.805 "compare_and_write": false, 00:10:47.805 "abort": false, 00:10:47.805 "seek_hole": false, 00:10:47.805 "seek_data": false, 00:10:47.805 "copy": false, 00:10:47.805 "nvme_iov_md": false 00:10:47.805 }, 00:10:47.805 "memory_domains": [ 00:10:47.805 { 00:10:47.805 "dma_device_id": "system", 00:10:47.805 "dma_device_type": 1 00:10:47.805 }, 00:10:47.805 { 00:10:47.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.805 "dma_device_type": 2 00:10:47.805 }, 00:10:47.805 { 00:10:47.805 "dma_device_id": "system", 00:10:47.805 "dma_device_type": 1 00:10:47.805 }, 00:10:47.805 { 00:10:47.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.805 "dma_device_type": 2 00:10:47.805 } 00:10:47.805 ], 00:10:47.805 "driver_specific": { 00:10:47.805 "raid": { 00:10:47.805 "uuid": "6d5ad69f-78e3-4462-a563-e9d934b7c2f6", 00:10:47.805 "strip_size_kb": 0, 00:10:47.805 "state": "online", 00:10:47.805 "raid_level": "raid1", 00:10:47.805 "superblock": false, 00:10:47.805 "num_base_bdevs": 2, 00:10:47.805 "num_base_bdevs_discovered": 2, 00:10:47.805 "num_base_bdevs_operational": 2, 00:10:47.805 "base_bdevs_list": [ 00:10:47.805 { 00:10:47.805 "name": "BaseBdev1", 00:10:47.805 "uuid": "5ff38dac-4943-49bc-afdf-9617f3c6b209", 00:10:47.805 "is_configured": true, 00:10:47.805 "data_offset": 0, 00:10:47.805 "data_size": 65536 00:10:47.805 }, 00:10:47.805 { 00:10:47.805 "name": "BaseBdev2", 00:10:47.805 "uuid": "baaa9437-584d-45c3-a821-010a9636192e", 00:10:47.805 "is_configured": true, 00:10:47.805 "data_offset": 0, 00:10:47.805 "data_size": 65536 00:10:47.805 } 00:10:47.805 ] 00:10:47.805 } 00:10:47.805 } 00:10:47.805 }' 00:10:47.805 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.806 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:47.806 BaseBdev2' 00:10:47.806 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:47.806 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:47.806 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:48.062 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:48.062 "name": "BaseBdev1", 00:10:48.062 "aliases": [ 00:10:48.062 "5ff38dac-4943-49bc-afdf-9617f3c6b209" 00:10:48.062 ], 00:10:48.062 "product_name": "Malloc disk", 00:10:48.062 "block_size": 512, 00:10:48.062 "num_blocks": 65536, 00:10:48.062 "uuid": "5ff38dac-4943-49bc-afdf-9617f3c6b209", 
00:10:48.062 "assigned_rate_limits": { 00:10:48.062 "rw_ios_per_sec": 0, 00:10:48.062 "rw_mbytes_per_sec": 0, 00:10:48.062 "r_mbytes_per_sec": 0, 00:10:48.062 "w_mbytes_per_sec": 0 00:10:48.062 }, 00:10:48.062 "claimed": true, 00:10:48.062 "claim_type": "exclusive_write", 00:10:48.062 "zoned": false, 00:10:48.062 "supported_io_types": { 00:10:48.062 "read": true, 00:10:48.062 "write": true, 00:10:48.062 "unmap": true, 00:10:48.062 "flush": true, 00:10:48.062 "reset": true, 00:10:48.062 "nvme_admin": false, 00:10:48.062 "nvme_io": false, 00:10:48.062 "nvme_io_md": false, 00:10:48.062 "write_zeroes": true, 00:10:48.062 "zcopy": true, 00:10:48.062 "get_zone_info": false, 00:10:48.062 "zone_management": false, 00:10:48.062 "zone_append": false, 00:10:48.062 "compare": false, 00:10:48.062 "compare_and_write": false, 00:10:48.062 "abort": true, 00:10:48.062 "seek_hole": false, 00:10:48.062 "seek_data": false, 00:10:48.062 "copy": true, 00:10:48.062 "nvme_iov_md": false 00:10:48.062 }, 00:10:48.062 "memory_domains": [ 00:10:48.062 { 00:10:48.062 "dma_device_id": "system", 00:10:48.062 "dma_device_type": 1 00:10:48.062 }, 00:10:48.062 { 00:10:48.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.062 "dma_device_type": 2 00:10:48.062 } 00:10:48.062 ], 00:10:48.062 "driver_specific": {} 00:10:48.062 }' 00:10:48.062 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.062 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.062 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:48.062 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.062 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:48.319 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:48.576 "name": "BaseBdev2", 00:10:48.576 "aliases": [ 00:10:48.576 "baaa9437-584d-45c3-a821-010a9636192e" 00:10:48.576 ], 00:10:48.576 "product_name": "Malloc disk", 00:10:48.576 "block_size": 512, 00:10:48.576 "num_blocks": 65536, 00:10:48.576 "uuid": "baaa9437-584d-45c3-a821-010a9636192e", 00:10:48.576 "assigned_rate_limits": { 00:10:48.576 "rw_ios_per_sec": 0, 00:10:48.576 "rw_mbytes_per_sec": 0, 00:10:48.576 "r_mbytes_per_sec": 0, 00:10:48.576 "w_mbytes_per_sec": 0 
00:10:48.576 }, 00:10:48.576 "claimed": true, 00:10:48.576 "claim_type": "exclusive_write", 00:10:48.576 "zoned": false, 00:10:48.576 "supported_io_types": { 00:10:48.576 "read": true, 00:10:48.576 "write": true, 00:10:48.576 "unmap": true, 00:10:48.576 "flush": true, 00:10:48.576 "reset": true, 00:10:48.576 "nvme_admin": false, 00:10:48.576 "nvme_io": false, 00:10:48.576 "nvme_io_md": false, 00:10:48.576 "write_zeroes": true, 00:10:48.576 "zcopy": true, 00:10:48.576 "get_zone_info": false, 00:10:48.576 "zone_management": false, 00:10:48.576 "zone_append": false, 00:10:48.576 "compare": false, 00:10:48.576 "compare_and_write": false, 00:10:48.576 "abort": true, 00:10:48.576 "seek_hole": false, 00:10:48.576 "seek_data": false, 00:10:48.576 "copy": true, 00:10:48.576 "nvme_iov_md": false 00:10:48.576 }, 00:10:48.576 "memory_domains": [ 00:10:48.576 { 00:10:48.576 "dma_device_id": "system", 00:10:48.576 "dma_device_type": 1 00:10:48.576 }, 00:10:48.576 { 00:10:48.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.576 "dma_device_type": 2 00:10:48.576 } 00:10:48.576 ], 00:10:48.576 "driver_specific": {} 00:10:48.576 }' 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.576 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:48.833 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:48.833 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.833 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:48.833 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:48.833 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:48.834 [2024-07-15 10:18:13.584013] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.834 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.091 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.091 "name": "Existed_Raid", 00:10:49.091 "uuid": "6d5ad69f-78e3-4462-a563-e9d934b7c2f6", 00:10:49.091 "strip_size_kb": 0, 00:10:49.091 "state": "online", 00:10:49.091 "raid_level": "raid1", 00:10:49.091 "superblock": false, 00:10:49.091 "num_base_bdevs": 2, 00:10:49.091 "num_base_bdevs_discovered": 1, 00:10:49.091 "num_base_bdevs_operational": 1, 00:10:49.091 "base_bdevs_list": [ 00:10:49.091 { 00:10:49.091 "name": null, 00:10:49.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.091 "is_configured": false, 00:10:49.091 "data_offset": 0, 00:10:49.091 "data_size": 65536 00:10:49.091 }, 00:10:49.091 { 00:10:49.091 "name": "BaseBdev2", 00:10:49.091 "uuid": "baaa9437-584d-45c3-a821-010a9636192e", 00:10:49.091 "is_configured": true, 00:10:49.091 "data_offset": 0, 00:10:49.091 "data_size": 65536 00:10:49.091 } 00:10:49.091 ] 00:10:49.091 }' 00:10:49.091 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.091 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.654 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:49.654 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:49.654 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.654 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:49.654 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:49.654 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.654 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:49.911 [2024-07-15 10:18:14.543339] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.911 [2024-07-15 10:18:14.543397] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.911 [2024-07-15 
10:18:14.553384] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.911 [2024-07-15 10:18:14.553425] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.911 [2024-07-15 10:18:14.553433] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1295600 name Existed_Raid, state offline 00:10:49.911 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:49.911 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:49.911 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.911 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1756551 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1756551 ']' 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1756551 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1756551 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1756551' 00:10:50.182 killing process with pid 1756551 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1756551 00:10:50.182 [2024-07-15 10:18:14.797639] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.182 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1756551 00:10:50.182 [2024-07-15 10:18:14.798422] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.445 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:50.445 00:10:50.445 real 0m8.023s 00:10:50.445 user 0m14.072s 00:10:50.445 sys 0m1.619s 00:10:50.445 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.445 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.445 ************************************ 00:10:50.445 END TEST raid_state_function_test 00:10:50.445 ************************************ 00:10:50.445 10:18:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:50.445 10:18:15 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:50.445 10:18:15 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:50.445 10:18:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.445 10:18:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.445 ************************************ 00:10:50.445 START TEST raid_state_function_test_sb 00:10:50.445 ************************************ 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:50.445 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1758203 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1758203' 00:10:50.446 Process raid pid: 1758203 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1758203 
/var/tmp/spdk-raid.sock 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1758203 ']' 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:50.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.446 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.446 [2024-07-15 10:18:15.114237] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:50.446 [2024-07-15 10:18:15.114283] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 
0000:3d:02.7 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:50.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:50.446 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:50.446 [2024-07-15 10:18:15.205613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.703 [2024-07-15 10:18:15.279961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.703 [2024-07-15 10:18:15.330484] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.703 [2024-07-15 10:18:15.330513] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.271 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.271 10:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:51.271 10:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:51.271 [2024-07-15 10:18:16.057511] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.271 [2024-07-15 10:18:16.057543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.271 [2024-07-15 10:18:16.057549] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:10:51.271 [2024-07-15 10:18:16.057557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:51.529 "name": "Existed_Raid", 00:10:51.529 "uuid": "fb761b0c-28c8-422c-9719-8f4211beec28", 00:10:51.529 "strip_size_kb": 0, 00:10:51.529 "state": "configuring", 00:10:51.529 "raid_level": "raid1", 00:10:51.529 "superblock": true, 00:10:51.529 "num_base_bdevs": 2, 00:10:51.529 "num_base_bdevs_discovered": 0, 00:10:51.529 "num_base_bdevs_operational": 2, 00:10:51.529 "base_bdevs_list": [ 00:10:51.529 { 00:10:51.529 "name": "BaseBdev1", 00:10:51.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.529 "is_configured": false, 00:10:51.529 "data_offset": 0, 00:10:51.529 "data_size": 0 00:10:51.529 }, 00:10:51.529 { 00:10:51.529 "name": "BaseBdev2", 00:10:51.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.529 "is_configured": false, 00:10:51.529 "data_offset": 0, 00:10:51.529 "data_size": 0 00:10:51.529 } 00:10:51.529 ] 00:10:51.529 }' 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:51.529 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.093 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:52.093 [2024-07-15 10:18:16.875567] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.093 [2024-07-15 10:18:16.875588] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1534f20 name Existed_Raid, state configuring 00:10:52.351 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create 
-s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:52.351 [2024-07-15 10:18:17.048023] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:52.351 [2024-07-15 10:18:17.048041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:52.351 [2024-07-15 10:18:17.048047] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.351 [2024-07-15 10:18:17.048065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.351 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.608 [2024-07-15 10:18:17.220746] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.608 BaseBdev1 00:10:52.608 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:52.608 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:52.608 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:52.608 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:52.608 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:52.608 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:52.608 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:52.608 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.866 [ 00:10:52.866 { 00:10:52.866 "name": "BaseBdev1", 00:10:52.866 "aliases": [ 00:10:52.866 "d9f3d568-5c93-4600-910d-c0b1f2f553cb" 00:10:52.866 ], 00:10:52.866 "product_name": "Malloc disk", 00:10:52.866 "block_size": 512, 00:10:52.866 "num_blocks": 65536, 00:10:52.866 "uuid": "d9f3d568-5c93-4600-910d-c0b1f2f553cb", 00:10:52.866 "assigned_rate_limits": { 00:10:52.866 "rw_ios_per_sec": 0, 00:10:52.866 "rw_mbytes_per_sec": 0, 00:10:52.866 "r_mbytes_per_sec": 0, 00:10:52.866 "w_mbytes_per_sec": 0 00:10:52.866 }, 00:10:52.866 "claimed": true, 00:10:52.866 "claim_type": "exclusive_write", 00:10:52.866 "zoned": false, 00:10:52.866 "supported_io_types": { 00:10:52.866 "read": true, 00:10:52.866 "write": true, 00:10:52.866 "unmap": true, 00:10:52.866 "flush": true, 00:10:52.866 "reset": true, 00:10:52.866 "nvme_admin": false, 00:10:52.866 "nvme_io": false, 00:10:52.866 "nvme_io_md": false, 00:10:52.866 "write_zeroes": true, 00:10:52.866 "zcopy": true, 00:10:52.866 "get_zone_info": false, 00:10:52.866 "zone_management": false, 00:10:52.866 "zone_append": false, 00:10:52.866 "compare": false, 00:10:52.866 "compare_and_write": false, 00:10:52.866 "abort": true, 00:10:52.866 "seek_hole": false, 00:10:52.866 "seek_data": false, 00:10:52.866 "copy": true, 00:10:52.866 "nvme_iov_md": false 00:10:52.866 }, 00:10:52.866 "memory_domains": [ 00:10:52.866 { 00:10:52.866 "dma_device_id": "system", 00:10:52.866 "dma_device_type": 1 00:10:52.866 }, 00:10:52.866 { 00:10:52.866 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.866 "dma_device_type": 2 00:10:52.866 } 00:10:52.866 ], 00:10:52.866 "driver_specific": {} 00:10:52.866 } 00:10:52.866 ] 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.866 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.124 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.124 "name": "Existed_Raid", 00:10:53.124 "uuid": "44bcb0f7-bfb2-430d-9c3b-f3b79abf7b10", 00:10:53.124 "strip_size_kb": 0, 00:10:53.124 "state": "configuring", 00:10:53.124 "raid_level": "raid1", 00:10:53.124 "superblock": true, 00:10:53.124 "num_base_bdevs": 2, 00:10:53.124 "num_base_bdevs_discovered": 1, 00:10:53.124 "num_base_bdevs_operational": 2, 00:10:53.124 "base_bdevs_list": [ 00:10:53.124 { 00:10:53.124 "name": "BaseBdev1", 00:10:53.124 "uuid": "d9f3d568-5c93-4600-910d-c0b1f2f553cb", 00:10:53.124 "is_configured": true, 00:10:53.124 "data_offset": 2048, 00:10:53.124 "data_size": 63488 00:10:53.124 }, 00:10:53.124 { 00:10:53.124 "name": "BaseBdev2", 00:10:53.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.124 "is_configured": false, 00:10:53.124 "data_offset": 0, 00:10:53.124 "data_size": 0 00:10:53.124 } 00:10:53.124 ] 00:10:53.124 }' 00:10:53.124 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.124 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.687 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:53.687 [2024-07-15 10:18:18.379728] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.687 [2024-07-15 10:18:18.379756] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1534810 name Existed_Raid, state configuring 00:10:53.687 10:18:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:53.944 [2024-07-15 10:18:18.548183] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.944 [2024-07-15 10:18:18.549237] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.945 [2024-07-15 10:18:18.549262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.945 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.202 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.202 "name": "Existed_Raid", 00:10:54.202 "uuid": "1c07d8bc-8d08-4a4e-8c76-12b730c6669b", 00:10:54.202 "strip_size_kb": 0, 00:10:54.202 "state": "configuring", 00:10:54.202 "raid_level": "raid1", 00:10:54.202 "superblock": true, 00:10:54.202 "num_base_bdevs": 2, 00:10:54.202 "num_base_bdevs_discovered": 1, 00:10:54.202 "num_base_bdevs_operational": 2, 00:10:54.202 "base_bdevs_list": [ 00:10:54.202 { 00:10:54.202 "name": "BaseBdev1", 00:10:54.202 "uuid": "d9f3d568-5c93-4600-910d-c0b1f2f553cb", 00:10:54.202 "is_configured": true, 00:10:54.202 "data_offset": 2048, 00:10:54.202 "data_size": 63488 00:10:54.202 }, 00:10:54.202 { 00:10:54.202 "name": "BaseBdev2", 00:10:54.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.202 "is_configured": false, 00:10:54.202 "data_offset": 0, 00:10:54.202 "data_size": 0 00:10:54.202 } 00:10:54.202 ] 00:10:54.202 }' 00:10:54.202 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.202 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.460 10:18:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.719 [2024-07-15 10:18:19.408964] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.719 [2024-07-15 10:18:19.409085] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1535600 00:10:54.719 [2024-07-15 10:18:19.409094] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:54.719 [2024-07-15 10:18:19.409214] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15369c0 00:10:54.719 [2024-07-15 10:18:19.409298] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1535600 00:10:54.719 [2024-07-15 10:18:19.409305] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1535600 00:10:54.719 [2024-07-15 10:18:19.409369] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.719 BaseBdev2 00:10:54.719 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:54.719 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:54.719 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:54.719 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:54.719 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:54.719 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:54.719 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:54.978 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.978 [ 00:10:54.978 { 00:10:54.978 "name": "BaseBdev2", 00:10:54.978 "aliases": [ 00:10:54.978 "6eb67ffc-fd54-4333-890c-ccfe576f4b7e" 00:10:54.978 ], 00:10:54.978 "product_name": "Malloc disk", 00:10:54.978 "block_size": 512, 00:10:54.978 "num_blocks": 65536, 00:10:54.978 "uuid": "6eb67ffc-fd54-4333-890c-ccfe576f4b7e", 00:10:54.978 "assigned_rate_limits": { 00:10:54.978 "rw_ios_per_sec": 0, 00:10:54.978 "rw_mbytes_per_sec": 0, 00:10:54.978 "r_mbytes_per_sec": 0, 00:10:54.978 "w_mbytes_per_sec": 0 00:10:54.978 }, 00:10:54.978 "claimed": true, 00:10:54.978 "claim_type": "exclusive_write", 00:10:54.978 "zoned": false, 00:10:54.978 "supported_io_types": { 00:10:54.978 "read": true, 00:10:54.978 "write": true, 00:10:54.978 "unmap": true, 00:10:54.978 "flush": true, 00:10:54.978 "reset": true, 00:10:54.978 "nvme_admin": false, 00:10:54.978 "nvme_io": false, 00:10:54.978 "nvme_io_md": false, 00:10:54.978 "write_zeroes": true, 00:10:54.978 "zcopy": true, 00:10:54.978 "get_zone_info": false, 00:10:54.978 "zone_management": false, 00:10:54.978 "zone_append": false, 00:10:54.978 "compare": false, 00:10:54.978 "compare_and_write": false, 00:10:54.978 "abort": true, 00:10:54.978 "seek_hole": false, 00:10:54.978 "seek_data": false, 00:10:54.978 "copy": true, 00:10:54.978 "nvme_iov_md": false 00:10:54.978 }, 
00:10:54.978 "memory_domains": [ 00:10:54.978 { 00:10:54.978 "dma_device_id": "system", 00:10:54.978 "dma_device_type": 1 00:10:54.979 }, 00:10:54.979 { 00:10:54.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.979 "dma_device_type": 2 00:10:54.979 } 00:10:54.979 ], 00:10:54.979 "driver_specific": {} 00:10:54.979 } 00:10:54.979 ] 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.979 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.236 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:55.236 "name": "Existed_Raid", 00:10:55.236 "uuid": "1c07d8bc-8d08-4a4e-8c76-12b730c6669b", 00:10:55.236 "strip_size_kb": 0, 00:10:55.236 "state": "online", 00:10:55.236 "raid_level": "raid1", 00:10:55.236 "superblock": true, 00:10:55.236 "num_base_bdevs": 2, 00:10:55.236 "num_base_bdevs_discovered": 2, 00:10:55.236 "num_base_bdevs_operational": 2, 00:10:55.236 "base_bdevs_list": [ 00:10:55.236 { 00:10:55.236 "name": "BaseBdev1", 00:10:55.236 "uuid": "d9f3d568-5c93-4600-910d-c0b1f2f553cb", 00:10:55.236 "is_configured": true, 00:10:55.236 "data_offset": 2048, 00:10:55.236 "data_size": 63488 00:10:55.236 }, 00:10:55.236 { 00:10:55.236 "name": "BaseBdev2", 00:10:55.236 "uuid": "6eb67ffc-fd54-4333-890c-ccfe576f4b7e", 00:10:55.236 "is_configured": true, 00:10:55.236 "data_offset": 2048, 00:10:55.236 "data_size": 63488 00:10:55.236 } 00:10:55.236 ] 00:10:55.236 }' 00:10:55.236 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:55.236 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.799 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.799 10:18:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:55.799 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:55.799 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:55.799 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:55.799 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:55.799 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:55.799 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:55.799 [2024-07-15 10:18:20.576148] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.057 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:56.057 "name": "Existed_Raid", 00:10:56.057 "aliases": [ 00:10:56.057 "1c07d8bc-8d08-4a4e-8c76-12b730c6669b" 00:10:56.057 ], 00:10:56.057 "product_name": "Raid Volume", 00:10:56.057 "block_size": 512, 00:10:56.057 "num_blocks": 63488, 00:10:56.057 "uuid": "1c07d8bc-8d08-4a4e-8c76-12b730c6669b", 00:10:56.057 "assigned_rate_limits": { 00:10:56.057 "rw_ios_per_sec": 0, 00:10:56.057 "rw_mbytes_per_sec": 0, 00:10:56.057 "r_mbytes_per_sec": 0, 00:10:56.057 "w_mbytes_per_sec": 0 00:10:56.057 }, 00:10:56.057 "claimed": false, 00:10:56.057 "zoned": false, 00:10:56.057 "supported_io_types": { 00:10:56.057 "read": true, 00:10:56.057 "write": true, 00:10:56.057 "unmap": false, 00:10:56.057 "flush": false, 00:10:56.057 "reset": true, 00:10:56.057 "nvme_admin": false, 00:10:56.057 "nvme_io": false, 00:10:56.057 "nvme_io_md": false, 00:10:56.057 "write_zeroes": true, 00:10:56.057 "zcopy": false, 00:10:56.057 "get_zone_info": false, 00:10:56.057 "zone_management": false, 00:10:56.057 "zone_append": false, 00:10:56.057 "compare": false, 00:10:56.057 "compare_and_write": false, 00:10:56.057 "abort": false, 00:10:56.057 "seek_hole": false, 00:10:56.057 "seek_data": false, 00:10:56.057 "copy": false, 00:10:56.057 "nvme_iov_md": false 00:10:56.057 }, 00:10:56.058 "memory_domains": [ 00:10:56.058 { 00:10:56.058 "dma_device_id": "system", 00:10:56.058 "dma_device_type": 1 00:10:56.058 }, 00:10:56.058 { 00:10:56.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.058 "dma_device_type": 2 00:10:56.058 }, 00:10:56.058 { 00:10:56.058 "dma_device_id": "system", 00:10:56.058 "dma_device_type": 1 00:10:56.058 }, 00:10:56.058 { 00:10:56.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.058 "dma_device_type": 2 00:10:56.058 } 00:10:56.058 ], 00:10:56.058 "driver_specific": { 00:10:56.058 "raid": { 00:10:56.058 "uuid": "1c07d8bc-8d08-4a4e-8c76-12b730c6669b", 00:10:56.058 "strip_size_kb": 0, 00:10:56.058 "state": "online", 00:10:56.058 "raid_level": "raid1", 00:10:56.058 "superblock": true, 00:10:56.058 "num_base_bdevs": 2, 00:10:56.058 "num_base_bdevs_discovered": 2, 00:10:56.058 "num_base_bdevs_operational": 2, 00:10:56.058 "base_bdevs_list": [ 00:10:56.058 { 00:10:56.058 "name": "BaseBdev1", 00:10:56.058 "uuid": "d9f3d568-5c93-4600-910d-c0b1f2f553cb", 00:10:56.058 "is_configured": true, 00:10:56.058 "data_offset": 2048, 00:10:56.058 "data_size": 63488 00:10:56.058 }, 00:10:56.058 { 00:10:56.058 "name": "BaseBdev2", 00:10:56.058 "uuid": 
"6eb67ffc-fd54-4333-890c-ccfe576f4b7e", 00:10:56.058 "is_configured": true, 00:10:56.058 "data_offset": 2048, 00:10:56.058 "data_size": 63488 00:10:56.058 } 00:10:56.058 ] 00:10:56.058 } 00:10:56.058 } 00:10:56.058 }' 00:10:56.058 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.058 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:56.058 BaseBdev2' 00:10:56.058 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:56.058 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:56.058 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:56.058 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:56.058 "name": "BaseBdev1", 00:10:56.058 "aliases": [ 00:10:56.058 "d9f3d568-5c93-4600-910d-c0b1f2f553cb" 00:10:56.058 ], 00:10:56.058 "product_name": "Malloc disk", 00:10:56.058 "block_size": 512, 00:10:56.058 "num_blocks": 65536, 00:10:56.058 "uuid": "d9f3d568-5c93-4600-910d-c0b1f2f553cb", 00:10:56.058 "assigned_rate_limits": { 00:10:56.058 "rw_ios_per_sec": 0, 00:10:56.058 "rw_mbytes_per_sec": 0, 00:10:56.058 "r_mbytes_per_sec": 0, 00:10:56.058 "w_mbytes_per_sec": 0 00:10:56.058 }, 00:10:56.058 "claimed": true, 00:10:56.058 "claim_type": "exclusive_write", 00:10:56.058 "zoned": false, 00:10:56.058 "supported_io_types": { 00:10:56.058 "read": true, 00:10:56.058 "write": true, 00:10:56.058 "unmap": true, 00:10:56.058 "flush": true, 00:10:56.058 "reset": true, 00:10:56.058 "nvme_admin": false, 00:10:56.058 "nvme_io": false, 00:10:56.058 "nvme_io_md": false, 00:10:56.058 "write_zeroes": true, 00:10:56.058 "zcopy": true, 00:10:56.058 "get_zone_info": false, 00:10:56.058 "zone_management": false, 00:10:56.058 "zone_append": false, 00:10:56.058 "compare": false, 00:10:56.058 "compare_and_write": false, 00:10:56.058 "abort": true, 00:10:56.058 "seek_hole": false, 00:10:56.058 "seek_data": false, 00:10:56.058 "copy": true, 00:10:56.058 "nvme_iov_md": false 00:10:56.058 }, 00:10:56.058 "memory_domains": [ 00:10:56.058 { 00:10:56.058 "dma_device_id": "system", 00:10:56.058 "dma_device_type": 1 00:10:56.058 }, 00:10:56.058 { 00:10:56.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.058 "dma_device_type": 2 00:10:56.058 } 00:10:56.058 ], 00:10:56.058 "driver_specific": {} 00:10:56.058 }' 00:10:56.058 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.316 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.316 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:56.316 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.316 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.316 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:56.316 10:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.316 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.316 10:18:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:56.316 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.316 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.574 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:56.574 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:56.574 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:56.574 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:56.574 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:56.574 "name": "BaseBdev2", 00:10:56.574 "aliases": [ 00:10:56.574 "6eb67ffc-fd54-4333-890c-ccfe576f4b7e" 00:10:56.574 ], 00:10:56.574 "product_name": "Malloc disk", 00:10:56.574 "block_size": 512, 00:10:56.574 "num_blocks": 65536, 00:10:56.574 "uuid": "6eb67ffc-fd54-4333-890c-ccfe576f4b7e", 00:10:56.574 "assigned_rate_limits": { 00:10:56.574 "rw_ios_per_sec": 0, 00:10:56.574 "rw_mbytes_per_sec": 0, 00:10:56.574 "r_mbytes_per_sec": 0, 00:10:56.574 "w_mbytes_per_sec": 0 00:10:56.574 }, 00:10:56.574 "claimed": true, 00:10:56.574 "claim_type": "exclusive_write", 00:10:56.574 "zoned": false, 00:10:56.574 "supported_io_types": { 00:10:56.574 "read": true, 00:10:56.574 "write": true, 00:10:56.574 "unmap": true, 00:10:56.574 "flush": true, 00:10:56.574 "reset": true, 00:10:56.574 "nvme_admin": false, 00:10:56.574 "nvme_io": false, 00:10:56.574 "nvme_io_md": false, 00:10:56.574 "write_zeroes": true, 00:10:56.574 "zcopy": true, 00:10:56.574 "get_zone_info": false, 00:10:56.574 "zone_management": false, 00:10:56.574 "zone_append": false, 00:10:56.574 "compare": false, 00:10:56.574 "compare_and_write": false, 00:10:56.574 "abort": true, 00:10:56.574 "seek_hole": false, 00:10:56.574 "seek_data": false, 00:10:56.574 "copy": true, 00:10:56.574 "nvme_iov_md": false 00:10:56.574 }, 00:10:56.574 "memory_domains": [ 00:10:56.574 { 00:10:56.574 "dma_device_id": "system", 00:10:56.574 "dma_device_type": 1 00:10:56.574 }, 00:10:56.574 { 00:10:56.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.574 "dma_device_type": 2 00:10:56.574 } 00:10:56.574 ], 00:10:56.574 "driver_specific": {} 00:10:56.574 }' 00:10:56.574 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.574 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:56.831 10:18:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:56.831 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:57.087 [2024-07-15 10:18:21.759046] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:10:57.087 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:57.088 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:57.088 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:57.088 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:57.088 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.088 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.345 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:57.345 "name": "Existed_Raid", 00:10:57.345 "uuid": "1c07d8bc-8d08-4a4e-8c76-12b730c6669b", 00:10:57.345 "strip_size_kb": 0, 00:10:57.345 "state": "online", 00:10:57.345 "raid_level": "raid1", 00:10:57.345 "superblock": true, 00:10:57.345 "num_base_bdevs": 2, 00:10:57.345 "num_base_bdevs_discovered": 1, 00:10:57.345 "num_base_bdevs_operational": 1, 00:10:57.345 "base_bdevs_list": [ 00:10:57.345 { 00:10:57.345 "name": null, 00:10:57.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.345 "is_configured": false, 00:10:57.345 "data_offset": 2048, 00:10:57.345 "data_size": 63488 00:10:57.345 }, 00:10:57.345 { 00:10:57.345 "name": "BaseBdev2", 00:10:57.345 "uuid": "6eb67ffc-fd54-4333-890c-ccfe576f4b7e", 00:10:57.345 "is_configured": true, 00:10:57.345 "data_offset": 
2048, 00:10:57.345 "data_size": 63488 00:10:57.345 } 00:10:57.345 ] 00:10:57.345 }' 00:10:57.345 10:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:57.345 10:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.911 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:57.911 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:57.911 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:57.911 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.911 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:57.911 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.911 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:58.169 [2024-07-15 10:18:22.782531] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.169 [2024-07-15 10:18:22.782595] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.169 [2024-07-15 10:18:22.792498] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.169 [2024-07-15 10:18:22.792538] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.169 [2024-07-15 10:18:22.792550] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1535600 name Existed_Raid, state offline 00:10:58.169 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:58.169 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:58.169 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.169 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1758203 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1758203 ']' 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1758203 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.427 10:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1758203 00:10:58.427 10:18:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:58.427 10:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:58.427 10:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1758203' 00:10:58.427 killing process with pid 1758203 00:10:58.427 10:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1758203 00:10:58.427 [2024-07-15 10:18:23.035311] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.427 10:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1758203 00:10:58.427 [2024-07-15 10:18:23.036118] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.427 10:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:58.427 00:10:58.427 real 0m8.153s 00:10:58.427 user 0m14.325s 00:10:58.428 sys 0m1.635s 00:10:58.428 10:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.428 10:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.428 ************************************ 00:10:58.428 END TEST raid_state_function_test_sb 00:10:58.428 ************************************ 00:10:58.685 10:18:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:58.685 10:18:23 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:58.685 10:18:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:58.685 10:18:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.685 10:18:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.685 ************************************ 00:10:58.685 START TEST raid_superblock_test 00:10:58.685 ************************************ 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 
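For reference, the raid_superblock_test run starting here talks to SPDK only through scripts/rpc.py against the bdev_svc app serving /var/tmp/spdk-raid.sock. The sketch below condenses the RPC sequence visible in the trace that follows (malloc base bdevs wrapped in passthru bdevs, then a RAID1 volume created with a superblock). Paths, sizes, names and UUIDs are taken from the log itself; the helper variables, the backgrounding of bdev_svc and running the sequence standalone outside autotest are assumptions added for the sketch (the real script waits for the RPC socket via waitforlisten before issuing calls).

  # Hedged sketch, not part of the CI run: reconstructed from the trace below.
  sock=/var/tmp/spdk-raid.sock
  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  # app under test: bdev_svc with raid debug logging, serving the RPC socket
  /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r $sock -L bdev_raid &
  # each base bdev is a 32 MiB, 512 B-block malloc bdev claimed by a passthru bdev
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc2
  $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # assemble a RAID1 volume on the passthru bdevs, with an on-disk superblock (-s)
  $rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  # the test then verifies state/level/base-bdev counts from this JSON
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
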
00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1759765 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1759765 /var/tmp/spdk-raid.sock 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1759765 ']' 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:58.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.685 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.685 [2024-07-15 10:18:23.345088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:58.685 [2024-07-15 10:18:23.345133] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759765 ] 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:10:58.685 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:58.685 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.685 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:58.686 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:58.686 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:58.686 [2024-07-15 10:18:23.436679] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.943 [2024-07-15 10:18:23.510392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.943 [2024-07-15 10:18:23.564111] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.943 [2024-07-15 10:18:23.564138] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.509 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:59.509 malloc1 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:59.768 [2024-07-15 10:18:24.456086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.768 [2024-07-15 10:18:24.456123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.768 [2024-07-15 10:18:24.456137] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c5a2f0 00:10:59.768 [2024-07-15 10:18:24.456145] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.768 [2024-07-15 10:18:24.457254] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.768 [2024-07-15 10:18:24.457278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.768 pt1 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.768 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:00.025 malloc2 00:11:00.025 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.025 [2024-07-15 10:18:24.800833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.025 [2024-07-15 10:18:24.800864] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:11:00.025 [2024-07-15 10:18:24.800875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c5b6d0 00:11:00.025 [2024-07-15 10:18:24.800883] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.026 [2024-07-15 10:18:24.801847] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.026 [2024-07-15 10:18:24.801868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.026 pt2 00:11:00.283 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:00.283 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:00.283 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:00.283 [2024-07-15 10:18:24.973285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.283 [2024-07-15 10:18:24.974121] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.283 [2024-07-15 10:18:24.974216] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1df4310 00:11:00.283 [2024-07-15 10:18:24.974224] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:00.283 [2024-07-15 10:18:24.974353] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1df3ce0 00:11:00.283 [2024-07-15 10:18:24.974442] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1df4310 00:11:00.283 [2024-07-15 10:18:24.974449] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1df4310 00:11:00.283 [2024-07-15 10:18:24.974509] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.283 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.284 10:18:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.541 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.541 "name": "raid_bdev1", 00:11:00.541 "uuid": 
"cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:00.541 "strip_size_kb": 0, 00:11:00.541 "state": "online", 00:11:00.541 "raid_level": "raid1", 00:11:00.541 "superblock": true, 00:11:00.541 "num_base_bdevs": 2, 00:11:00.541 "num_base_bdevs_discovered": 2, 00:11:00.541 "num_base_bdevs_operational": 2, 00:11:00.541 "base_bdevs_list": [ 00:11:00.541 { 00:11:00.541 "name": "pt1", 00:11:00.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.541 "is_configured": true, 00:11:00.541 "data_offset": 2048, 00:11:00.541 "data_size": 63488 00:11:00.541 }, 00:11:00.541 { 00:11:00.541 "name": "pt2", 00:11:00.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.541 "is_configured": true, 00:11:00.541 "data_offset": 2048, 00:11:00.541 "data_size": 63488 00:11:00.541 } 00:11:00.541 ] 00:11:00.541 }' 00:11:00.541 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.541 10:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:01.107 [2024-07-15 10:18:25.807580] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.107 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:01.107 "name": "raid_bdev1", 00:11:01.107 "aliases": [ 00:11:01.107 "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b" 00:11:01.107 ], 00:11:01.107 "product_name": "Raid Volume", 00:11:01.107 "block_size": 512, 00:11:01.107 "num_blocks": 63488, 00:11:01.107 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:01.108 "assigned_rate_limits": { 00:11:01.108 "rw_ios_per_sec": 0, 00:11:01.108 "rw_mbytes_per_sec": 0, 00:11:01.108 "r_mbytes_per_sec": 0, 00:11:01.108 "w_mbytes_per_sec": 0 00:11:01.108 }, 00:11:01.108 "claimed": false, 00:11:01.108 "zoned": false, 00:11:01.108 "supported_io_types": { 00:11:01.108 "read": true, 00:11:01.108 "write": true, 00:11:01.108 "unmap": false, 00:11:01.108 "flush": false, 00:11:01.108 "reset": true, 00:11:01.108 "nvme_admin": false, 00:11:01.108 "nvme_io": false, 00:11:01.108 "nvme_io_md": false, 00:11:01.108 "write_zeroes": true, 00:11:01.108 "zcopy": false, 00:11:01.108 "get_zone_info": false, 00:11:01.108 "zone_management": false, 00:11:01.108 "zone_append": false, 00:11:01.108 "compare": false, 00:11:01.108 "compare_and_write": false, 00:11:01.108 "abort": false, 00:11:01.108 "seek_hole": false, 00:11:01.108 "seek_data": false, 00:11:01.108 "copy": false, 00:11:01.108 "nvme_iov_md": false 00:11:01.108 }, 00:11:01.108 "memory_domains": [ 00:11:01.108 { 00:11:01.108 "dma_device_id": "system", 00:11:01.108 "dma_device_type": 1 00:11:01.108 }, 
00:11:01.108 { 00:11:01.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.108 "dma_device_type": 2 00:11:01.108 }, 00:11:01.108 { 00:11:01.108 "dma_device_id": "system", 00:11:01.108 "dma_device_type": 1 00:11:01.108 }, 00:11:01.108 { 00:11:01.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.108 "dma_device_type": 2 00:11:01.108 } 00:11:01.108 ], 00:11:01.108 "driver_specific": { 00:11:01.108 "raid": { 00:11:01.108 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:01.108 "strip_size_kb": 0, 00:11:01.108 "state": "online", 00:11:01.108 "raid_level": "raid1", 00:11:01.108 "superblock": true, 00:11:01.108 "num_base_bdevs": 2, 00:11:01.108 "num_base_bdevs_discovered": 2, 00:11:01.108 "num_base_bdevs_operational": 2, 00:11:01.108 "base_bdevs_list": [ 00:11:01.108 { 00:11:01.108 "name": "pt1", 00:11:01.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.108 "is_configured": true, 00:11:01.108 "data_offset": 2048, 00:11:01.108 "data_size": 63488 00:11:01.108 }, 00:11:01.108 { 00:11:01.108 "name": "pt2", 00:11:01.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.108 "is_configured": true, 00:11:01.108 "data_offset": 2048, 00:11:01.108 "data_size": 63488 00:11:01.108 } 00:11:01.108 ] 00:11:01.108 } 00:11:01.108 } 00:11:01.108 }' 00:11:01.108 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.108 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:01.108 pt2' 00:11:01.108 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.108 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:01.108 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.368 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.368 "name": "pt1", 00:11:01.368 "aliases": [ 00:11:01.368 "00000000-0000-0000-0000-000000000001" 00:11:01.368 ], 00:11:01.368 "product_name": "passthru", 00:11:01.368 "block_size": 512, 00:11:01.368 "num_blocks": 65536, 00:11:01.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.368 "assigned_rate_limits": { 00:11:01.368 "rw_ios_per_sec": 0, 00:11:01.368 "rw_mbytes_per_sec": 0, 00:11:01.368 "r_mbytes_per_sec": 0, 00:11:01.368 "w_mbytes_per_sec": 0 00:11:01.368 }, 00:11:01.368 "claimed": true, 00:11:01.368 "claim_type": "exclusive_write", 00:11:01.368 "zoned": false, 00:11:01.368 "supported_io_types": { 00:11:01.368 "read": true, 00:11:01.368 "write": true, 00:11:01.368 "unmap": true, 00:11:01.368 "flush": true, 00:11:01.368 "reset": true, 00:11:01.368 "nvme_admin": false, 00:11:01.368 "nvme_io": false, 00:11:01.368 "nvme_io_md": false, 00:11:01.368 "write_zeroes": true, 00:11:01.368 "zcopy": true, 00:11:01.368 "get_zone_info": false, 00:11:01.368 "zone_management": false, 00:11:01.368 "zone_append": false, 00:11:01.368 "compare": false, 00:11:01.368 "compare_and_write": false, 00:11:01.368 "abort": true, 00:11:01.368 "seek_hole": false, 00:11:01.368 "seek_data": false, 00:11:01.368 "copy": true, 00:11:01.368 "nvme_iov_md": false 00:11:01.368 }, 00:11:01.368 "memory_domains": [ 00:11:01.368 { 00:11:01.368 "dma_device_id": "system", 00:11:01.368 "dma_device_type": 1 00:11:01.368 }, 00:11:01.368 { 00:11:01.368 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:01.368 "dma_device_type": 2 00:11:01.368 } 00:11:01.368 ], 00:11:01.368 "driver_specific": { 00:11:01.368 "passthru": { 00:11:01.368 "name": "pt1", 00:11:01.368 "base_bdev_name": "malloc1" 00:11:01.368 } 00:11:01.368 } 00:11:01.368 }' 00:11:01.368 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.368 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.368 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.368 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.368 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:01.626 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.884 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.884 "name": "pt2", 00:11:01.884 "aliases": [ 00:11:01.884 "00000000-0000-0000-0000-000000000002" 00:11:01.884 ], 00:11:01.884 "product_name": "passthru", 00:11:01.884 "block_size": 512, 00:11:01.884 "num_blocks": 65536, 00:11:01.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.884 "assigned_rate_limits": { 00:11:01.884 "rw_ios_per_sec": 0, 00:11:01.884 "rw_mbytes_per_sec": 0, 00:11:01.884 "r_mbytes_per_sec": 0, 00:11:01.884 "w_mbytes_per_sec": 0 00:11:01.884 }, 00:11:01.884 "claimed": true, 00:11:01.884 "claim_type": "exclusive_write", 00:11:01.884 "zoned": false, 00:11:01.884 "supported_io_types": { 00:11:01.884 "read": true, 00:11:01.884 "write": true, 00:11:01.884 "unmap": true, 00:11:01.884 "flush": true, 00:11:01.884 "reset": true, 00:11:01.884 "nvme_admin": false, 00:11:01.884 "nvme_io": false, 00:11:01.884 "nvme_io_md": false, 00:11:01.884 "write_zeroes": true, 00:11:01.884 "zcopy": true, 00:11:01.884 "get_zone_info": false, 00:11:01.884 "zone_management": false, 00:11:01.884 "zone_append": false, 00:11:01.884 "compare": false, 00:11:01.884 "compare_and_write": false, 00:11:01.884 "abort": true, 00:11:01.884 "seek_hole": false, 00:11:01.884 "seek_data": false, 00:11:01.884 "copy": true, 00:11:01.884 "nvme_iov_md": false 00:11:01.884 }, 00:11:01.884 "memory_domains": [ 00:11:01.884 { 00:11:01.884 "dma_device_id": "system", 00:11:01.884 "dma_device_type": 1 00:11:01.884 }, 00:11:01.884 { 00:11:01.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.884 "dma_device_type": 2 00:11:01.884 } 00:11:01.884 ], 00:11:01.884 "driver_specific": { 
00:11:01.884 "passthru": { 00:11:01.884 "name": "pt2", 00:11:01.884 "base_bdev_name": "malloc2" 00:11:01.884 } 00:11:01.884 } 00:11:01.884 }' 00:11:01.884 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.884 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.884 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.884 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.884 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.884 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.884 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.142 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:02.142 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:02.142 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.142 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:02.142 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:02.142 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:02.142 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:02.401 [2024-07-15 10:18:26.950519] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.401 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b 00:11:02.401 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b ']' 00:11:02.401 10:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:02.401 [2024-07-15 10:18:27.126820] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.401 [2024-07-15 10:18:27.126834] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.401 [2024-07-15 10:18:27.126870] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.401 [2024-07-15 10:18:27.126911] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.401 [2024-07-15 10:18:27.126920] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1df4310 name raid_bdev1, state offline 00:11:02.401 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.401 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:02.659 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:02.659 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:02.659 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.659 10:18:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:02.917 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.917 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:02.917 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:02.917 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:03.175 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:03.175 10:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:03.175 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:03.175 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:03.175 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:03.175 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.175 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:03.175 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.176 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:03.176 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.176 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:03.176 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:03.176 10:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:11:03.434 [2024-07-15 10:18:27.997033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:03.434 [2024-07-15 10:18:27.997989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:03.434 [2024-07-15 10:18:27.998030] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:03.434 [2024-07-15 10:18:27.998058] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:03.434 [2024-07-15 10:18:27.998071] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:11:03.434 [2024-07-15 10:18:27.998077] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1dfd3f0 name raid_bdev1, state configuring 00:11:03.434 request: 00:11:03.434 { 00:11:03.434 "name": "raid_bdev1", 00:11:03.434 "raid_level": "raid1", 00:11:03.434 "base_bdevs": [ 00:11:03.434 "malloc1", 00:11:03.434 "malloc2" 00:11:03.434 ], 00:11:03.434 "superblock": false, 00:11:03.434 "method": "bdev_raid_create", 00:11:03.434 "req_id": 1 00:11:03.434 } 00:11:03.434 Got JSON-RPC error response 00:11:03.434 response: 00:11:03.434 { 00:11:03.434 "code": -17, 00:11:03.434 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:03.434 } 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:03.434 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:03.692 [2024-07-15 10:18:28.341900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:03.692 [2024-07-15 10:18:28.341935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.692 [2024-07-15 10:18:28.341947] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dfdd70 00:11:03.692 [2024-07-15 10:18:28.341956] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.692 [2024-07-15 10:18:28.343076] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.692 [2024-07-15 10:18:28.343097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:03.692 [2024-07-15 10:18:28.343142] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:03.692 [2024-07-15 10:18:28.343160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:03.692 pt1 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:03.692 10:18:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.692 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.964 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.964 "name": "raid_bdev1", 00:11:03.964 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:03.964 "strip_size_kb": 0, 00:11:03.964 "state": "configuring", 00:11:03.964 "raid_level": "raid1", 00:11:03.964 "superblock": true, 00:11:03.964 "num_base_bdevs": 2, 00:11:03.964 "num_base_bdevs_discovered": 1, 00:11:03.964 "num_base_bdevs_operational": 2, 00:11:03.964 "base_bdevs_list": [ 00:11:03.964 { 00:11:03.964 "name": "pt1", 00:11:03.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.964 "is_configured": true, 00:11:03.964 "data_offset": 2048, 00:11:03.964 "data_size": 63488 00:11:03.964 }, 00:11:03.964 { 00:11:03.964 "name": null, 00:11:03.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.964 "is_configured": false, 00:11:03.964 "data_offset": 2048, 00:11:03.964 "data_size": 63488 00:11:03.964 } 00:11:03.964 ] 00:11:03.964 }' 00:11:03.964 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.964 10:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.234 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:11:04.234 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:04.234 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:04.234 10:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.493 [2024-07-15 10:18:29.151985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.493 [2024-07-15 10:18:29.152013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.493 [2024-07-15 10:18:29.152024] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df4bb0 00:11:04.493 [2024-07-15 10:18:29.152032] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.493 [2024-07-15 10:18:29.152261] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.493 [2024-07-15 10:18:29.152274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.493 [2024-07-15 10:18:29.152313] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.493 [2024-07-15 10:18:29.152326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.493 [2024-07-15 10:18:29.152390] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1df2de0 00:11:04.493 [2024-07-15 
10:18:29.152398] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.493 [2024-07-15 10:18:29.152503] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c53eb0 00:11:04.493 [2024-07-15 10:18:29.152588] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1df2de0 00:11:04.493 [2024-07-15 10:18:29.152595] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1df2de0 00:11:04.493 [2024-07-15 10:18:29.152657] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.493 pt2 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.493 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.751 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:04.751 "name": "raid_bdev1", 00:11:04.751 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:04.751 "strip_size_kb": 0, 00:11:04.751 "state": "online", 00:11:04.751 "raid_level": "raid1", 00:11:04.751 "superblock": true, 00:11:04.751 "num_base_bdevs": 2, 00:11:04.751 "num_base_bdevs_discovered": 2, 00:11:04.751 "num_base_bdevs_operational": 2, 00:11:04.751 "base_bdevs_list": [ 00:11:04.751 { 00:11:04.751 "name": "pt1", 00:11:04.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.751 "is_configured": true, 00:11:04.751 "data_offset": 2048, 00:11:04.751 "data_size": 63488 00:11:04.751 }, 00:11:04.751 { 00:11:04.751 "name": "pt2", 00:11:04.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.751 "is_configured": true, 00:11:04.751 "data_offset": 2048, 00:11:04.751 "data_size": 63488 00:11:04.751 } 00:11:04.751 ] 00:11:04.751 }' 00:11:04.751 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:04.751 10:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.315 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.315 10:18:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:05.315 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:05.315 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:05.315 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:05.315 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:05.315 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:05.315 10:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:05.315 [2024-07-15 10:18:29.982272] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.315 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:05.315 "name": "raid_bdev1", 00:11:05.315 "aliases": [ 00:11:05.315 "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b" 00:11:05.315 ], 00:11:05.315 "product_name": "Raid Volume", 00:11:05.315 "block_size": 512, 00:11:05.315 "num_blocks": 63488, 00:11:05.315 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:05.315 "assigned_rate_limits": { 00:11:05.315 "rw_ios_per_sec": 0, 00:11:05.315 "rw_mbytes_per_sec": 0, 00:11:05.315 "r_mbytes_per_sec": 0, 00:11:05.315 "w_mbytes_per_sec": 0 00:11:05.315 }, 00:11:05.315 "claimed": false, 00:11:05.315 "zoned": false, 00:11:05.315 "supported_io_types": { 00:11:05.315 "read": true, 00:11:05.315 "write": true, 00:11:05.315 "unmap": false, 00:11:05.315 "flush": false, 00:11:05.315 "reset": true, 00:11:05.315 "nvme_admin": false, 00:11:05.315 "nvme_io": false, 00:11:05.315 "nvme_io_md": false, 00:11:05.315 "write_zeroes": true, 00:11:05.315 "zcopy": false, 00:11:05.315 "get_zone_info": false, 00:11:05.315 "zone_management": false, 00:11:05.315 "zone_append": false, 00:11:05.315 "compare": false, 00:11:05.315 "compare_and_write": false, 00:11:05.315 "abort": false, 00:11:05.315 "seek_hole": false, 00:11:05.315 "seek_data": false, 00:11:05.315 "copy": false, 00:11:05.315 "nvme_iov_md": false 00:11:05.315 }, 00:11:05.315 "memory_domains": [ 00:11:05.315 { 00:11:05.315 "dma_device_id": "system", 00:11:05.315 "dma_device_type": 1 00:11:05.315 }, 00:11:05.315 { 00:11:05.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.315 "dma_device_type": 2 00:11:05.315 }, 00:11:05.315 { 00:11:05.315 "dma_device_id": "system", 00:11:05.315 "dma_device_type": 1 00:11:05.315 }, 00:11:05.315 { 00:11:05.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.315 "dma_device_type": 2 00:11:05.315 } 00:11:05.315 ], 00:11:05.315 "driver_specific": { 00:11:05.315 "raid": { 00:11:05.315 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:05.315 "strip_size_kb": 0, 00:11:05.315 "state": "online", 00:11:05.315 "raid_level": "raid1", 00:11:05.315 "superblock": true, 00:11:05.315 "num_base_bdevs": 2, 00:11:05.315 "num_base_bdevs_discovered": 2, 00:11:05.315 "num_base_bdevs_operational": 2, 00:11:05.315 "base_bdevs_list": [ 00:11:05.315 { 00:11:05.315 "name": "pt1", 00:11:05.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.315 "is_configured": true, 00:11:05.315 "data_offset": 2048, 00:11:05.315 "data_size": 63488 00:11:05.315 }, 00:11:05.315 { 00:11:05.315 "name": "pt2", 00:11:05.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.315 "is_configured": true, 00:11:05.315 
"data_offset": 2048, 00:11:05.315 "data_size": 63488 00:11:05.315 } 00:11:05.315 ] 00:11:05.315 } 00:11:05.315 } 00:11:05.315 }' 00:11:05.315 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.315 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:05.315 pt2' 00:11:05.315 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:05.315 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:05.315 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:05.572 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:05.572 "name": "pt1", 00:11:05.572 "aliases": [ 00:11:05.572 "00000000-0000-0000-0000-000000000001" 00:11:05.572 ], 00:11:05.572 "product_name": "passthru", 00:11:05.572 "block_size": 512, 00:11:05.572 "num_blocks": 65536, 00:11:05.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.572 "assigned_rate_limits": { 00:11:05.572 "rw_ios_per_sec": 0, 00:11:05.572 "rw_mbytes_per_sec": 0, 00:11:05.572 "r_mbytes_per_sec": 0, 00:11:05.572 "w_mbytes_per_sec": 0 00:11:05.572 }, 00:11:05.572 "claimed": true, 00:11:05.572 "claim_type": "exclusive_write", 00:11:05.572 "zoned": false, 00:11:05.572 "supported_io_types": { 00:11:05.572 "read": true, 00:11:05.572 "write": true, 00:11:05.572 "unmap": true, 00:11:05.572 "flush": true, 00:11:05.572 "reset": true, 00:11:05.572 "nvme_admin": false, 00:11:05.572 "nvme_io": false, 00:11:05.572 "nvme_io_md": false, 00:11:05.572 "write_zeroes": true, 00:11:05.572 "zcopy": true, 00:11:05.572 "get_zone_info": false, 00:11:05.572 "zone_management": false, 00:11:05.572 "zone_append": false, 00:11:05.572 "compare": false, 00:11:05.572 "compare_and_write": false, 00:11:05.572 "abort": true, 00:11:05.572 "seek_hole": false, 00:11:05.572 "seek_data": false, 00:11:05.572 "copy": true, 00:11:05.572 "nvme_iov_md": false 00:11:05.572 }, 00:11:05.572 "memory_domains": [ 00:11:05.572 { 00:11:05.572 "dma_device_id": "system", 00:11:05.572 "dma_device_type": 1 00:11:05.572 }, 00:11:05.572 { 00:11:05.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.572 "dma_device_type": 2 00:11:05.572 } 00:11:05.572 ], 00:11:05.572 "driver_specific": { 00:11:05.573 "passthru": { 00:11:05.573 "name": "pt1", 00:11:05.573 "base_bdev_name": "malloc1" 00:11:05.573 } 00:11:05.573 } 00:11:05.573 }' 00:11:05.573 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:05.573 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:05.573 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:05.573 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:05.573 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:05.829 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:06.085 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:06.085 "name": "pt2", 00:11:06.085 "aliases": [ 00:11:06.085 "00000000-0000-0000-0000-000000000002" 00:11:06.085 ], 00:11:06.085 "product_name": "passthru", 00:11:06.085 "block_size": 512, 00:11:06.085 "num_blocks": 65536, 00:11:06.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.085 "assigned_rate_limits": { 00:11:06.085 "rw_ios_per_sec": 0, 00:11:06.085 "rw_mbytes_per_sec": 0, 00:11:06.085 "r_mbytes_per_sec": 0, 00:11:06.085 "w_mbytes_per_sec": 0 00:11:06.085 }, 00:11:06.085 "claimed": true, 00:11:06.085 "claim_type": "exclusive_write", 00:11:06.085 "zoned": false, 00:11:06.085 "supported_io_types": { 00:11:06.085 "read": true, 00:11:06.085 "write": true, 00:11:06.085 "unmap": true, 00:11:06.085 "flush": true, 00:11:06.085 "reset": true, 00:11:06.085 "nvme_admin": false, 00:11:06.085 "nvme_io": false, 00:11:06.085 "nvme_io_md": false, 00:11:06.085 "write_zeroes": true, 00:11:06.085 "zcopy": true, 00:11:06.085 "get_zone_info": false, 00:11:06.085 "zone_management": false, 00:11:06.085 "zone_append": false, 00:11:06.085 "compare": false, 00:11:06.085 "compare_and_write": false, 00:11:06.085 "abort": true, 00:11:06.085 "seek_hole": false, 00:11:06.085 "seek_data": false, 00:11:06.085 "copy": true, 00:11:06.085 "nvme_iov_md": false 00:11:06.085 }, 00:11:06.085 "memory_domains": [ 00:11:06.085 { 00:11:06.085 "dma_device_id": "system", 00:11:06.085 "dma_device_type": 1 00:11:06.085 }, 00:11:06.085 { 00:11:06.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.085 "dma_device_type": 2 00:11:06.085 } 00:11:06.085 ], 00:11:06.085 "driver_specific": { 00:11:06.085 "passthru": { 00:11:06.085 "name": "pt2", 00:11:06.085 "base_bdev_name": "malloc2" 00:11:06.085 } 00:11:06.085 } 00:11:06.085 }' 00:11:06.085 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.085 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:06.085 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:06.085 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.085 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:06.085 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:06.085 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.343 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:06.343 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:06.343 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.343 10:18:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:06.343 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:06.343 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:06.343 10:18:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:06.601 [2024-07-15 10:18:31.137271] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b '!=' cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b ']' 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:06.601 [2024-07-15 10:18:31.309598] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.601 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.859 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.859 "name": "raid_bdev1", 00:11:06.859 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:06.859 "strip_size_kb": 0, 00:11:06.859 "state": "online", 00:11:06.859 "raid_level": "raid1", 00:11:06.859 "superblock": true, 00:11:06.859 "num_base_bdevs": 2, 00:11:06.859 "num_base_bdevs_discovered": 1, 00:11:06.859 "num_base_bdevs_operational": 1, 00:11:06.859 "base_bdevs_list": [ 00:11:06.859 { 00:11:06.859 "name": null, 00:11:06.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.859 "is_configured": false, 00:11:06.859 "data_offset": 2048, 00:11:06.859 "data_size": 63488 00:11:06.859 }, 00:11:06.859 { 00:11:06.859 "name": "pt2", 00:11:06.859 
"uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.859 "is_configured": true, 00:11:06.859 "data_offset": 2048, 00:11:06.859 "data_size": 63488 00:11:06.859 } 00:11:06.859 ] 00:11:06.859 }' 00:11:06.859 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.859 10:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.424 10:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:07.424 [2024-07-15 10:18:32.147734] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.424 [2024-07-15 10:18:32.147754] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.424 [2024-07-15 10:18:32.147790] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.424 [2024-07-15 10:18:32.147818] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.425 [2024-07-15 10:18:32.147829] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1df2de0 name raid_bdev1, state offline 00:11:07.425 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.425 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:11:07.682 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:11:07.682 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:11:07.682 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:11:07.682 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:07.682 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:07.939 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:11:07.939 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:11:07.939 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:11:07.939 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:11:07.939 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:11:07.939 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:07.939 [2024-07-15 10:18:32.632973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:07.939 [2024-07-15 10:18:32.633011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.940 [2024-07-15 10:18:32.633023] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df1f90 00:11:07.940 [2024-07-15 10:18:32.633031] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.940 [2024-07-15 10:18:32.634168] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.940 [2024-07-15 10:18:32.634192] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:07.940 [2024-07-15 10:18:32.634239] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:07.940 [2024-07-15 10:18:32.634255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.940 [2024-07-15 10:18:32.634312] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c52b40 00:11:07.940 [2024-07-15 10:18:32.634319] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.940 [2024-07-15 10:18:32.634427] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1dfe810 00:11:07.940 [2024-07-15 10:18:32.634503] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c52b40 00:11:07.940 [2024-07-15 10:18:32.634509] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c52b40 00:11:07.940 [2024-07-15 10:18:32.634572] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.940 pt2 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.940 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.198 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.198 "name": "raid_bdev1", 00:11:08.198 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:08.198 "strip_size_kb": 0, 00:11:08.198 "state": "online", 00:11:08.198 "raid_level": "raid1", 00:11:08.198 "superblock": true, 00:11:08.198 "num_base_bdevs": 2, 00:11:08.198 "num_base_bdevs_discovered": 1, 00:11:08.198 "num_base_bdevs_operational": 1, 00:11:08.198 "base_bdevs_list": [ 00:11:08.198 { 00:11:08.198 "name": null, 00:11:08.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.198 "is_configured": false, 00:11:08.198 "data_offset": 2048, 00:11:08.198 "data_size": 63488 00:11:08.198 }, 00:11:08.198 { 00:11:08.198 "name": "pt2", 00:11:08.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.198 "is_configured": true, 00:11:08.198 "data_offset": 2048, 00:11:08.198 "data_size": 63488 00:11:08.198 } 00:11:08.198 ] 00:11:08.198 }' 00:11:08.198 10:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:11:08.198 10:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.762 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:08.762 [2024-07-15 10:18:33.459110] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.762 [2024-07-15 10:18:33.459129] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.762 [2024-07-15 10:18:33.459168] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.762 [2024-07-15 10:18:33.459198] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.762 [2024-07-15 10:18:33.459206] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c52b40 name raid_bdev1, state offline 00:11:08.762 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.762 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:11:09.019 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:11:09.019 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:11:09.019 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:11:09.019 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.019 [2024-07-15 10:18:33.795960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.019 [2024-07-15 10:18:33.795999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.019 [2024-07-15 10:18:33.796010] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1df48d0 00:11:09.019 [2024-07-15 10:18:33.796019] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.019 [2024-07-15 10:18:33.797159] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.019 [2024-07-15 10:18:33.797182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.019 [2024-07-15 10:18:33.797229] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:09.019 [2024-07-15 10:18:33.797247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:09.019 [2024-07-15 10:18:33.797314] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:09.019 [2024-07-15 10:18:33.797323] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.019 [2024-07-15 10:18:33.797331] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c53690 name raid_bdev1, state configuring 00:11:09.019 [2024-07-15 10:18:33.797351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.019 [2024-07-15 10:18:33.797388] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c521e0 00:11:09.019 [2024-07-15 10:18:33.797394] bdev_raid.c:1695:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:11:09.019 [2024-07-15 10:18:33.797502] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c5a990 00:11:09.019 [2024-07-15 10:18:33.797580] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c521e0 00:11:09.019 [2024-07-15 10:18:33.797587] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c521e0 00:11:09.019 [2024-07-15 10:18:33.797649] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.019 pt1 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.277 10:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.277 "name": "raid_bdev1", 00:11:09.277 "uuid": "cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b", 00:11:09.277 "strip_size_kb": 0, 00:11:09.277 "state": "online", 00:11:09.277 "raid_level": "raid1", 00:11:09.277 "superblock": true, 00:11:09.277 "num_base_bdevs": 2, 00:11:09.277 "num_base_bdevs_discovered": 1, 00:11:09.277 "num_base_bdevs_operational": 1, 00:11:09.277 "base_bdevs_list": [ 00:11:09.277 { 00:11:09.277 "name": null, 00:11:09.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.277 "is_configured": false, 00:11:09.277 "data_offset": 2048, 00:11:09.277 "data_size": 63488 00:11:09.277 }, 00:11:09.277 { 00:11:09.277 "name": "pt2", 00:11:09.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.277 "is_configured": true, 00:11:09.277 "data_offset": 2048, 00:11:09.277 "data_size": 63488 00:11:09.277 } 00:11:09.277 ] 00:11:09.277 }' 00:11:09.277 10:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.277 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.841 10:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:09.841 10:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs online 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:11:10.099 [2024-07-15 10:18:34.818730] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b '!=' cc0d17ab-ce92-48c0-a2ea-d35b5ec0267b ']' 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1759765 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1759765 ']' 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1759765 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1759765 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1759765' 00:11:10.099 killing process with pid 1759765 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1759765 00:11:10.099 [2024-07-15 10:18:34.886766] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.099 [2024-07-15 10:18:34.886809] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.099 [2024-07-15 10:18:34.886840] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.099 [2024-07-15 10:18:34.886848] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c521e0 name raid_bdev1, state offline 00:11:10.099 10:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1759765 00:11:10.356 [2024-07-15 10:18:34.902050] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.356 10:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:10.356 00:11:10.356 real 0m11.785s 00:11:10.356 user 0m21.271s 00:11:10.356 sys 0m2.281s 00:11:10.356 10:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.356 10:18:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.356 ************************************ 00:11:10.356 END TEST raid_superblock_test 00:11:10.356 ************************************ 00:11:10.356 10:18:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:10.356 10:18:35 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:10.356 10:18:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:10.356 10:18:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.356 
10:18:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.614 ************************************ 00:11:10.614 START TEST raid_read_error_test 00:11:10.614 ************************************ 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:10.614 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.edsvZj5lw5 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1762201 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1762201 /var/tmp/spdk-raid.sock 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1762201 ']' 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:10.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.615 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.615 [2024-07-15 10:18:35.214677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:10.615 [2024-07-15 10:18:35.214719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762201 ] 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:10.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.615 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:10.615 [2024-07-15 10:18:35.305784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.615 [2024-07-15 10:18:35.379947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.872 [2024-07-15 10:18:35.437662] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.872 [2024-07-15 10:18:35.437685] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.436 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.436 10:18:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:11.436 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:11.436 10:18:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:11.436 BaseBdev1_malloc 00:11:11.436 10:18:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:11.693 true 00:11:11.693 10:18:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:11.969 [2024-07-15 10:18:36.490104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:11.969 [2024-07-15 10:18:36.490141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.969 [2024-07-15 10:18:36.490163] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13df190 00:11:11.969 [2024-07-15 10:18:36.490174] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.969 [2024-07-15 10:18:36.491474] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.969 [2024-07-15 10:18:36.491499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:11.969 BaseBdev1 00:11:11.969 10:18:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:11.969 10:18:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:11.969 BaseBdev2_malloc 00:11:11.969 10:18:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:12.226 true 00:11:12.226 10:18:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.226 [2024-07-15 10:18:36.990947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.226 [2024-07-15 10:18:36.990980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.226 [2024-07-15 10:18:36.990999] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13e3e20 00:11:12.226 [2024-07-15 10:18:36.991010] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.226 [2024-07-15 10:18:36.992041] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.226 [2024-07-15 10:18:36.992065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.226 BaseBdev2 00:11:12.226 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:12.482 [2024-07-15 10:18:37.147365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.482 [2024-07-15 10:18:37.148207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.482 [2024-07-15 10:18:37.148336] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x13e5a50 00:11:12.482 [2024-07-15 10:18:37.148344] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.482 [2024-07-15 10:18:37.148473] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x123a140 00:11:12.482 [2024-07-15 10:18:37.148579] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13e5a50 00:11:12.482 [2024-07-15 10:18:37.148586] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13e5a50 00:11:12.482 [2024-07-15 10:18:37.148654] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:12.482 10:18:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.482 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.739 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:12.739 "name": "raid_bdev1", 00:11:12.739 "uuid": "5433124b-f9ca-44dc-a3b4-6592c06abc53", 00:11:12.739 "strip_size_kb": 0, 00:11:12.739 "state": "online", 00:11:12.739 "raid_level": "raid1", 00:11:12.739 "superblock": true, 00:11:12.739 "num_base_bdevs": 2, 00:11:12.739 "num_base_bdevs_discovered": 2, 00:11:12.739 "num_base_bdevs_operational": 2, 00:11:12.739 "base_bdevs_list": [ 00:11:12.739 { 00:11:12.739 "name": "BaseBdev1", 00:11:12.739 "uuid": "940ecf84-7143-56ec-88f5-049ca440e33d", 00:11:12.739 "is_configured": true, 00:11:12.739 "data_offset": 2048, 00:11:12.739 "data_size": 63488 00:11:12.739 }, 00:11:12.739 { 00:11:12.739 "name": "BaseBdev2", 00:11:12.739 "uuid": "4f4d85f3-cfbf-5fcf-8a84-3af34140a956", 00:11:12.739 "is_configured": true, 00:11:12.739 "data_offset": 2048, 00:11:12.739 "data_size": 63488 00:11:12.739 } 00:11:12.739 ] 00:11:12.739 }' 00:11:12.739 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:12.739 10:18:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.302 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:13.302 10:18:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:13.302 [2024-07-15 10:18:37.901519] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13e09d0 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:14.231 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:14.232 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.232 10:18:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.488 10:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:14.488 "name": "raid_bdev1", 00:11:14.488 "uuid": "5433124b-f9ca-44dc-a3b4-6592c06abc53", 00:11:14.488 "strip_size_kb": 0, 00:11:14.488 "state": "online", 00:11:14.488 "raid_level": "raid1", 00:11:14.488 "superblock": true, 00:11:14.488 "num_base_bdevs": 2, 00:11:14.488 "num_base_bdevs_discovered": 2, 00:11:14.488 "num_base_bdevs_operational": 2, 00:11:14.488 "base_bdevs_list": [ 00:11:14.488 { 00:11:14.488 "name": "BaseBdev1", 00:11:14.488 "uuid": "940ecf84-7143-56ec-88f5-049ca440e33d", 00:11:14.488 "is_configured": true, 00:11:14.488 "data_offset": 2048, 00:11:14.488 "data_size": 63488 00:11:14.488 }, 00:11:14.488 { 00:11:14.488 "name": "BaseBdev2", 00:11:14.488 "uuid": "4f4d85f3-cfbf-5fcf-8a84-3af34140a956", 00:11:14.488 "is_configured": true, 00:11:14.488 "data_offset": 2048, 00:11:14.488 "data_size": 63488 00:11:14.488 } 00:11:14.488 ] 00:11:14.488 }' 00:11:14.488 10:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:14.488 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.050 10:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:15.050 [2024-07-15 10:18:39.808550] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.050 [2024-07-15 10:18:39.808579] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.050 [2024-07-15 10:18:39.810580] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.050 [2024-07-15 10:18:39.810604] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.050 [2024-07-15 10:18:39.810656] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.050 [2024-07-15 10:18:39.810663] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13e5a50 name raid_bdev1, state offline 00:11:15.050 0 00:11:15.050 10:18:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 
1762201 00:11:15.050 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1762201 ']' 00:11:15.050 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1762201 00:11:15.050 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:15.050 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:15.050 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1762201 00:11:15.307 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:15.307 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:15.307 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1762201' 00:11:15.307 killing process with pid 1762201 00:11:15.307 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1762201 00:11:15.307 [2024-07-15 10:18:39.877272] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.307 10:18:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1762201 00:11:15.307 [2024-07-15 10:18:39.886951] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.edsvZj5lw5 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:15.307 00:11:15.307 real 0m4.922s 00:11:15.307 user 0m7.382s 00:11:15.307 sys 0m0.878s 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.307 10:18:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.307 ************************************ 00:11:15.307 END TEST raid_read_error_test 00:11:15.307 ************************************ 00:11:15.565 10:18:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:15.565 10:18:40 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:15.565 10:18:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:15.565 10:18:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.565 10:18:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.565 ************************************ 00:11:15.565 START TEST raid_write_error_test 00:11:15.565 ************************************ 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # 
local num_base_bdevs=2 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.AUmTcgJDf7 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1763099 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1763099 /var/tmp/spdk-raid.sock 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1763099 ']' 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:15.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.565 10:18:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.565 [2024-07-15 10:18:40.218111] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:15.565 [2024-07-15 10:18:40.218153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763099 ] 00:11:15.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.565 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:15.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.565 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:15.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.565 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:15.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:15.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.566 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:15.566 [2024-07-15 10:18:40.310671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.823 [2024-07-15 10:18:40.384894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.823 [2024-07-15 10:18:40.438947] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.823 [2024-07-15 10:18:40.438974] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.386 10:18:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.386 10:18:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:16.386 10:18:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:16.386 10:18:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:16.386 BaseBdev1_malloc 00:11:16.386 10:18:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:16.642 true 00:11:16.642 10:18:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:16.927 [2024-07-15 10:18:41.467331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:16.927 [2024-07-15 10:18:41.467365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.927 [2024-07-15 10:18:41.467386] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139d190 00:11:16.927 [2024-07-15 10:18:41.467399] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:16.927 [2024-07-15 10:18:41.468737] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.927 [2024-07-15 10:18:41.468763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:16.927 BaseBdev1 00:11:16.927 10:18:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:16.927 10:18:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:16.927 BaseBdev2_malloc 00:11:16.927 10:18:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:17.184 true 00:11:17.184 10:18:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:17.440 [2024-07-15 10:18:41.992155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:17.440 [2024-07-15 10:18:41.992186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.440 [2024-07-15 10:18:41.992204] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13a1e20 00:11:17.440 [2024-07-15 10:18:41.992214] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.440 [2024-07-15 10:18:41.993175] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.440 [2024-07-15 10:18:41.993197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.440 BaseBdev2 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:11:17.440 [2024-07-15 10:18:42.176651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.440 [2024-07-15 10:18:42.177564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.440 [2024-07-15 10:18:42.177694] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x13a3a50 00:11:17.440 [2024-07-15 10:18:42.177704] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.440 [2024-07-15 10:18:42.177831] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11f8140 00:11:17.440 [2024-07-15 10:18:42.177947] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13a3a50 00:11:17.440 [2024-07-15 10:18:42.177954] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13a3a50 00:11:17.440 [2024-07-15 10:18:42.178022] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.440 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.696 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:17.696 "name": "raid_bdev1", 00:11:17.696 "uuid": "d5e8af93-bfcf-4bd2-acdb-e619a4870dee", 00:11:17.696 "strip_size_kb": 0, 00:11:17.696 "state": "online", 00:11:17.696 "raid_level": "raid1", 00:11:17.696 "superblock": true, 00:11:17.696 "num_base_bdevs": 2, 00:11:17.696 "num_base_bdevs_discovered": 2, 00:11:17.696 "num_base_bdevs_operational": 2, 00:11:17.696 "base_bdevs_list": [ 00:11:17.696 { 00:11:17.696 "name": "BaseBdev1", 00:11:17.696 "uuid": "c70d1898-4571-57b5-a3a2-6664a890e417", 00:11:17.696 "is_configured": true, 00:11:17.696 "data_offset": 2048, 00:11:17.696 "data_size": 63488 00:11:17.696 }, 00:11:17.696 { 00:11:17.696 "name": "BaseBdev2", 00:11:17.696 "uuid": "dbe77290-a503-5c61-a29c-00a7ae8c8279", 00:11:17.696 "is_configured": true, 00:11:17.696 "data_offset": 2048, 00:11:17.696 "data_size": 63488 00:11:17.696 } 00:11:17.696 ] 00:11:17.696 }' 00:11:17.696 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:17.696 10:18:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.260 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:18.260 10:18:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:18.260 [2024-07-15 10:18:42.938815] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x139e9d0 00:11:19.191 10:18:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:19.448 [2024-07-15 10:18:44.023156] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:19.448 [2024-07-15 10:18:44.023201] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.448 [2024-07-15 10:18:44.023370] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x139e9d0 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 
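The verify_raid_bdev_state helper traced above reduces to a single RPC plus a jq filter over its JSON output. A minimal sketch of the same check, assuming rpc.py stands for the spdk/scripts/rpc.py client used throughout this run and the app is still listening on /var/tmp/spdk-raid.sock:

  # Fetch every raid bdev and keep only the one under test.
  info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # Assert the fields the helper compares: state, level and base-bdev counts.
  [ "$(echo "$info" | jq -r .state)" = online ]
  [ "$(echo "$info" | jq -r .raid_level)" = raid1 ]
  [ "$(echo "$info" | jq -r .num_base_bdevs_discovered)" -eq 2 ]
  [ "$(echo "$info" | jq -r .num_base_bdevs_operational)" -eq 2 ]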
00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.448 "name": "raid_bdev1", 00:11:19.448 "uuid": "d5e8af93-bfcf-4bd2-acdb-e619a4870dee", 00:11:19.448 "strip_size_kb": 0, 00:11:19.448 "state": "online", 00:11:19.448 "raid_level": "raid1", 00:11:19.448 "superblock": true, 00:11:19.448 "num_base_bdevs": 2, 00:11:19.448 "num_base_bdevs_discovered": 1, 00:11:19.448 "num_base_bdevs_operational": 1, 00:11:19.448 "base_bdevs_list": [ 00:11:19.448 { 00:11:19.448 "name": null, 00:11:19.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.448 "is_configured": false, 00:11:19.448 "data_offset": 2048, 00:11:19.448 "data_size": 63488 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "name": "BaseBdev2", 00:11:19.448 "uuid": "dbe77290-a503-5c61-a29c-00a7ae8c8279", 00:11:19.448 "is_configured": true, 00:11:19.448 "data_offset": 2048, 00:11:19.448 "data_size": 63488 00:11:19.448 } 00:11:19.448 ] 00:11:19.448 }' 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.448 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.011 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:20.268 [2024-07-15 10:18:44.835754] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.268 [2024-07-15 10:18:44.835782] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.268 [2024-07-15 10:18:44.837694] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.268 [2024-07-15 10:18:44.837713] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.268 [2024-07-15 10:18:44.837746] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
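At this point the purpose of raid_write_error_test is visible in the state dump: after a write failure is injected into the error bdev stacked under BaseBdev1, bdevperf keeps driving I/O, the failing slot is dropped from the raid1 volume, and the array is expected to remain online with a single operational base bdev (BaseBdev1 replaced by a null entry above). A hedged sketch of the injection step, reusing the EE_BaseBdev1_malloc error bdev created earlier in this log and the same rpc.py shorthand and socket as in the sketch above:

  # Fail all subsequent writes on the error bdev wrapped around BaseBdev1.
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # raid1 absorbs the loss, so the follow-up verify_raid_bdev_state expects
  # state == online and num_base_bdevs_discovered == 1.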
00:11:20.268 [2024-07-15 10:18:44.837753] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13a3a50 name raid_bdev1, state offline 00:11:20.268 0 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1763099 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1763099 ']' 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1763099 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1763099 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1763099' 00:11:20.268 killing process with pid 1763099 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1763099 00:11:20.268 [2024-07-15 10:18:44.905279] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.268 10:18:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1763099 00:11:20.268 [2024-07-15 10:18:44.914118] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.AUmTcgJDf7 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:20.525 00:11:20.525 real 0m4.947s 00:11:20.525 user 0m7.426s 00:11:20.525 sys 0m0.900s 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.525 10:18:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.525 ************************************ 00:11:20.525 END TEST raid_write_error_test 00:11:20.525 ************************************ 00:11:20.525 10:18:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:20.525 10:18:45 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:11:20.525 10:18:45 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:20.525 10:18:45 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:20.525 10:18:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:20.525 10:18:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.525 10:18:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
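The final pass/fail criterion of the test is parsed out of the bdevperf summary (/raidtest/tmp.AUmTcgJDf7 in this run): because raid1 provides redundancy, the injected write errors must not reach the consumer, so the failures-per-second column for raid_bdev1 has to read 0.00. The extraction above is roughly equivalent to the following, assuming the usual bdevperf per-bdev result table:

  # Column 6 of the raid_bdev1 result row is the failure rate in I/Os per second.
  fail_per_s=$(grep -v Job /raidtest/tmp.AUmTcgJDf7 | grep raid_bdev1 | awk '{print $6}')
  # has_redundancy returns 0 for raid1, so a redundant level must show no failed I/O.
  [[ $fail_per_s = 0.00 ]]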
00:11:20.525 ************************************ 00:11:20.525 START TEST raid_state_function_test 00:11:20.525 ************************************ 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1764000 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1764000' 00:11:20.526 Process raid pid: 1764000 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1764000 /var/tmp/spdk-raid.sock 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1764000 ']' 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:20.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.526 10:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.526 [2024-07-15 10:18:45.252779] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:20.526 [2024-07-15 10:18:45.252824] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:02.6 
cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:20.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:20.526 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:20.783 [2024-07-15 10:18:45.344848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.783 [2024-07-15 10:18:45.414168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.783 [2024-07-15 10:18:45.463142] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.783 [2024-07-15 10:18:45.463166] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.345 10:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.345 10:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:21.345 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:21.601 [2024-07-15 10:18:46.197896] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.601 [2024-07-15 10:18:46.197938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:11:21.601 [2024-07-15 10:18:46.197948] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.601 [2024-07-15 10:18:46.197958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.601 [2024-07-15 10:18:46.197965] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.601 [2024-07-15 10:18:46.197975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.601 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:21.601 "name": "Existed_Raid", 00:11:21.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.601 "strip_size_kb": 64, 00:11:21.601 "state": "configuring", 00:11:21.601 "raid_level": "raid0", 00:11:21.601 "superblock": false, 00:11:21.601 "num_base_bdevs": 3, 00:11:21.601 "num_base_bdevs_discovered": 0, 00:11:21.601 "num_base_bdevs_operational": 3, 00:11:21.601 "base_bdevs_list": [ 00:11:21.601 { 00:11:21.601 "name": "BaseBdev1", 00:11:21.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.601 "is_configured": false, 00:11:21.601 "data_offset": 0, 00:11:21.601 "data_size": 0 00:11:21.601 }, 00:11:21.602 { 00:11:21.602 "name": "BaseBdev2", 00:11:21.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.602 "is_configured": false, 00:11:21.602 "data_offset": 0, 00:11:21.602 "data_size": 0 00:11:21.602 }, 00:11:21.602 { 00:11:21.602 "name": "BaseBdev3", 00:11:21.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.602 "is_configured": false, 00:11:21.602 "data_offset": 0, 00:11:21.602 "data_size": 0 00:11:21.602 } 00:11:21.602 ] 00:11:21.602 }' 00:11:21.602 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:21.602 10:18:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.164 10:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:22.421 [2024-07-15 10:18:47.019925] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.421 [2024-07-15 10:18:47.019948] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d1ef40 name Existed_Raid, state configuring 00:11:22.421 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:22.421 [2024-07-15 10:18:47.196390] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.422 [2024-07-15 10:18:47.196411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.422 [2024-07-15 10:18:47.196419] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.422 [2024-07-15 10:18:47.196431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.422 [2024-07-15 10:18:47.196438] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.422 [2024-07-15 10:18:47.196448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.679 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.679 [2024-07-15 10:18:47.377415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.679 BaseBdev1 00:11:22.679 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:22.679 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:22.679 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:22.679 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:22.679 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:22.679 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:22.679 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:22.937 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.937 [ 00:11:22.937 { 00:11:22.937 "name": "BaseBdev1", 00:11:22.937 "aliases": [ 00:11:22.937 "9fa3488b-df8d-4512-a653-c170644d17db" 00:11:22.937 ], 00:11:22.937 "product_name": "Malloc disk", 00:11:22.937 "block_size": 512, 00:11:22.937 "num_blocks": 65536, 00:11:22.937 "uuid": "9fa3488b-df8d-4512-a653-c170644d17db", 00:11:22.937 "assigned_rate_limits": { 00:11:22.937 "rw_ios_per_sec": 0, 00:11:22.937 "rw_mbytes_per_sec": 0, 00:11:22.937 "r_mbytes_per_sec": 0, 00:11:22.937 "w_mbytes_per_sec": 0 00:11:22.937 }, 00:11:22.937 "claimed": true, 00:11:22.937 "claim_type": "exclusive_write", 00:11:22.937 "zoned": false, 00:11:22.937 
"supported_io_types": { 00:11:22.937 "read": true, 00:11:22.937 "write": true, 00:11:22.937 "unmap": true, 00:11:22.937 "flush": true, 00:11:22.937 "reset": true, 00:11:22.937 "nvme_admin": false, 00:11:22.937 "nvme_io": false, 00:11:22.937 "nvme_io_md": false, 00:11:22.937 "write_zeroes": true, 00:11:22.937 "zcopy": true, 00:11:22.937 "get_zone_info": false, 00:11:22.937 "zone_management": false, 00:11:22.937 "zone_append": false, 00:11:22.937 "compare": false, 00:11:22.937 "compare_and_write": false, 00:11:22.937 "abort": true, 00:11:22.937 "seek_hole": false, 00:11:22.937 "seek_data": false, 00:11:22.937 "copy": true, 00:11:22.937 "nvme_iov_md": false 00:11:22.937 }, 00:11:22.937 "memory_domains": [ 00:11:22.937 { 00:11:22.937 "dma_device_id": "system", 00:11:22.937 "dma_device_type": 1 00:11:22.937 }, 00:11:22.937 { 00:11:22.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.937 "dma_device_type": 2 00:11:22.937 } 00:11:22.937 ], 00:11:22.938 "driver_specific": {} 00:11:22.938 } 00:11:22.938 ] 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.195 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:23.195 "name": "Existed_Raid", 00:11:23.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.195 "strip_size_kb": 64, 00:11:23.195 "state": "configuring", 00:11:23.195 "raid_level": "raid0", 00:11:23.195 "superblock": false, 00:11:23.195 "num_base_bdevs": 3, 00:11:23.195 "num_base_bdevs_discovered": 1, 00:11:23.195 "num_base_bdevs_operational": 3, 00:11:23.195 "base_bdevs_list": [ 00:11:23.195 { 00:11:23.195 "name": "BaseBdev1", 00:11:23.195 "uuid": "9fa3488b-df8d-4512-a653-c170644d17db", 00:11:23.195 "is_configured": true, 00:11:23.196 "data_offset": 0, 00:11:23.196 "data_size": 65536 00:11:23.196 }, 00:11:23.196 { 00:11:23.196 "name": "BaseBdev2", 00:11:23.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.196 "is_configured": false, 00:11:23.196 "data_offset": 0, 00:11:23.196 "data_size": 0 
00:11:23.196 }, 00:11:23.196 { 00:11:23.196 "name": "BaseBdev3", 00:11:23.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.196 "is_configured": false, 00:11:23.196 "data_offset": 0, 00:11:23.196 "data_size": 0 00:11:23.196 } 00:11:23.196 ] 00:11:23.196 }' 00:11:23.196 10:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:23.196 10:18:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.761 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:23.761 [2024-07-15 10:18:48.508316] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.761 [2024-07-15 10:18:48.508348] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d1e810 name Existed_Raid, state configuring 00:11:23.761 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:24.019 [2024-07-15 10:18:48.680780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.019 [2024-07-15 10:18:48.681851] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.019 [2024-07-15 10:18:48.681880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.019 [2024-07-15 10:18:48.681890] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.019 [2024-07-15 10:18:48.681907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:24.019 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.020 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:11:24.277 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:24.277 "name": "Existed_Raid", 00:11:24.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.277 "strip_size_kb": 64, 00:11:24.277 "state": "configuring", 00:11:24.277 "raid_level": "raid0", 00:11:24.277 "superblock": false, 00:11:24.277 "num_base_bdevs": 3, 00:11:24.277 "num_base_bdevs_discovered": 1, 00:11:24.277 "num_base_bdevs_operational": 3, 00:11:24.277 "base_bdevs_list": [ 00:11:24.277 { 00:11:24.277 "name": "BaseBdev1", 00:11:24.277 "uuid": "9fa3488b-df8d-4512-a653-c170644d17db", 00:11:24.277 "is_configured": true, 00:11:24.277 "data_offset": 0, 00:11:24.277 "data_size": 65536 00:11:24.277 }, 00:11:24.277 { 00:11:24.277 "name": "BaseBdev2", 00:11:24.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.277 "is_configured": false, 00:11:24.277 "data_offset": 0, 00:11:24.277 "data_size": 0 00:11:24.277 }, 00:11:24.277 { 00:11:24.277 "name": "BaseBdev3", 00:11:24.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.277 "is_configured": false, 00:11:24.277 "data_offset": 0, 00:11:24.277 "data_size": 0 00:11:24.277 } 00:11:24.277 ] 00:11:24.277 }' 00:11:24.277 10:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:24.277 10:18:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.843 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.843 [2024-07-15 10:18:49.537606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.843 BaseBdev2 00:11:24.843 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:24.843 10:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:24.843 10:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:24.843 10:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:24.843 10:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:24.843 10:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:24.843 10:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:25.101 10:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.102 [ 00:11:25.102 { 00:11:25.102 "name": "BaseBdev2", 00:11:25.102 "aliases": [ 00:11:25.102 "6ad584de-ea79-46a0-8249-85d5f7ac016f" 00:11:25.102 ], 00:11:25.102 "product_name": "Malloc disk", 00:11:25.102 "block_size": 512, 00:11:25.102 "num_blocks": 65536, 00:11:25.102 "uuid": "6ad584de-ea79-46a0-8249-85d5f7ac016f", 00:11:25.102 "assigned_rate_limits": { 00:11:25.102 "rw_ios_per_sec": 0, 00:11:25.102 "rw_mbytes_per_sec": 0, 00:11:25.102 "r_mbytes_per_sec": 0, 00:11:25.102 "w_mbytes_per_sec": 0 00:11:25.102 }, 00:11:25.102 "claimed": true, 00:11:25.102 "claim_type": "exclusive_write", 00:11:25.102 "zoned": false, 00:11:25.102 "supported_io_types": 
{ 00:11:25.102 "read": true, 00:11:25.102 "write": true, 00:11:25.102 "unmap": true, 00:11:25.102 "flush": true, 00:11:25.102 "reset": true, 00:11:25.102 "nvme_admin": false, 00:11:25.102 "nvme_io": false, 00:11:25.102 "nvme_io_md": false, 00:11:25.102 "write_zeroes": true, 00:11:25.102 "zcopy": true, 00:11:25.102 "get_zone_info": false, 00:11:25.102 "zone_management": false, 00:11:25.102 "zone_append": false, 00:11:25.102 "compare": false, 00:11:25.102 "compare_and_write": false, 00:11:25.102 "abort": true, 00:11:25.102 "seek_hole": false, 00:11:25.102 "seek_data": false, 00:11:25.102 "copy": true, 00:11:25.102 "nvme_iov_md": false 00:11:25.102 }, 00:11:25.102 "memory_domains": [ 00:11:25.102 { 00:11:25.102 "dma_device_id": "system", 00:11:25.102 "dma_device_type": 1 00:11:25.102 }, 00:11:25.102 { 00:11:25.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.102 "dma_device_type": 2 00:11:25.102 } 00:11:25.102 ], 00:11:25.102 "driver_specific": {} 00:11:25.102 } 00:11:25.102 ] 00:11:25.102 10:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:25.102 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:25.359 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:25.359 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.360 10:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.360 10:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.360 "name": "Existed_Raid", 00:11:25.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.360 "strip_size_kb": 64, 00:11:25.360 "state": "configuring", 00:11:25.360 "raid_level": "raid0", 00:11:25.360 "superblock": false, 00:11:25.360 "num_base_bdevs": 3, 00:11:25.360 "num_base_bdevs_discovered": 2, 00:11:25.360 "num_base_bdevs_operational": 3, 00:11:25.360 "base_bdevs_list": [ 00:11:25.360 { 00:11:25.360 "name": "BaseBdev1", 00:11:25.360 "uuid": "9fa3488b-df8d-4512-a653-c170644d17db", 00:11:25.360 "is_configured": true, 00:11:25.360 "data_offset": 0, 00:11:25.360 "data_size": 65536 00:11:25.360 }, 00:11:25.360 { 
00:11:25.360 "name": "BaseBdev2", 00:11:25.360 "uuid": "6ad584de-ea79-46a0-8249-85d5f7ac016f", 00:11:25.360 "is_configured": true, 00:11:25.360 "data_offset": 0, 00:11:25.360 "data_size": 65536 00:11:25.360 }, 00:11:25.360 { 00:11:25.360 "name": "BaseBdev3", 00:11:25.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.360 "is_configured": false, 00:11:25.360 "data_offset": 0, 00:11:25.360 "data_size": 0 00:11:25.360 } 00:11:25.360 ] 00:11:25.360 }' 00:11:25.360 10:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.360 10:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.925 10:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.925 [2024-07-15 10:18:50.691502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.925 [2024-07-15 10:18:50.691532] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d1f700 00:11:25.925 [2024-07-15 10:18:50.691538] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:25.925 [2024-07-15 10:18:50.691665] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d1f3d0 00:11:25.925 [2024-07-15 10:18:50.691746] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d1f700 00:11:25.925 [2024-07-15 10:18:50.691752] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1d1f700 00:11:25.925 [2024-07-15 10:18:50.691872] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.925 BaseBdev3 00:11:25.925 10:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:25.925 10:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:25.925 10:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:25.925 10:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:25.925 10:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:25.925 10:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:25.925 10:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:26.183 10:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.440 [ 00:11:26.440 { 00:11:26.440 "name": "BaseBdev3", 00:11:26.440 "aliases": [ 00:11:26.440 "a320813a-4b6d-4beb-96a9-7268940aa920" 00:11:26.440 ], 00:11:26.440 "product_name": "Malloc disk", 00:11:26.440 "block_size": 512, 00:11:26.440 "num_blocks": 65536, 00:11:26.440 "uuid": "a320813a-4b6d-4beb-96a9-7268940aa920", 00:11:26.440 "assigned_rate_limits": { 00:11:26.440 "rw_ios_per_sec": 0, 00:11:26.440 "rw_mbytes_per_sec": 0, 00:11:26.440 "r_mbytes_per_sec": 0, 00:11:26.440 "w_mbytes_per_sec": 0 00:11:26.440 }, 00:11:26.440 "claimed": true, 00:11:26.440 "claim_type": "exclusive_write", 00:11:26.440 "zoned": false, 00:11:26.440 "supported_io_types": { 
00:11:26.440 "read": true, 00:11:26.440 "write": true, 00:11:26.440 "unmap": true, 00:11:26.440 "flush": true, 00:11:26.440 "reset": true, 00:11:26.440 "nvme_admin": false, 00:11:26.440 "nvme_io": false, 00:11:26.440 "nvme_io_md": false, 00:11:26.440 "write_zeroes": true, 00:11:26.440 "zcopy": true, 00:11:26.440 "get_zone_info": false, 00:11:26.440 "zone_management": false, 00:11:26.440 "zone_append": false, 00:11:26.440 "compare": false, 00:11:26.440 "compare_and_write": false, 00:11:26.440 "abort": true, 00:11:26.440 "seek_hole": false, 00:11:26.440 "seek_data": false, 00:11:26.440 "copy": true, 00:11:26.440 "nvme_iov_md": false 00:11:26.440 }, 00:11:26.440 "memory_domains": [ 00:11:26.440 { 00:11:26.440 "dma_device_id": "system", 00:11:26.440 "dma_device_type": 1 00:11:26.440 }, 00:11:26.440 { 00:11:26.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.440 "dma_device_type": 2 00:11:26.440 } 00:11:26.440 ], 00:11:26.440 "driver_specific": {} 00:11:26.440 } 00:11:26.440 ] 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.440 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:26.440 "name": "Existed_Raid", 00:11:26.440 "uuid": "3b291b71-14d8-41f4-9aaf-e41c54d24b17", 00:11:26.440 "strip_size_kb": 64, 00:11:26.440 "state": "online", 00:11:26.440 "raid_level": "raid0", 00:11:26.440 "superblock": false, 00:11:26.440 "num_base_bdevs": 3, 00:11:26.440 "num_base_bdevs_discovered": 3, 00:11:26.440 "num_base_bdevs_operational": 3, 00:11:26.440 "base_bdevs_list": [ 00:11:26.440 { 00:11:26.440 "name": "BaseBdev1", 00:11:26.440 "uuid": "9fa3488b-df8d-4512-a653-c170644d17db", 00:11:26.440 "is_configured": true, 00:11:26.440 "data_offset": 0, 00:11:26.440 "data_size": 65536 00:11:26.440 }, 00:11:26.440 { 00:11:26.441 "name": 
"BaseBdev2", 00:11:26.441 "uuid": "6ad584de-ea79-46a0-8249-85d5f7ac016f", 00:11:26.441 "is_configured": true, 00:11:26.441 "data_offset": 0, 00:11:26.441 "data_size": 65536 00:11:26.441 }, 00:11:26.441 { 00:11:26.441 "name": "BaseBdev3", 00:11:26.441 "uuid": "a320813a-4b6d-4beb-96a9-7268940aa920", 00:11:26.441 "is_configured": true, 00:11:26.441 "data_offset": 0, 00:11:26.441 "data_size": 65536 00:11:26.441 } 00:11:26.441 ] 00:11:26.441 }' 00:11:26.441 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:26.441 10:18:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.004 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:27.004 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:27.004 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:27.004 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:27.004 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:27.004 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:27.004 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:27.004 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:27.261 [2024-07-15 10:18:51.854696] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.261 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:27.261 "name": "Existed_Raid", 00:11:27.261 "aliases": [ 00:11:27.261 "3b291b71-14d8-41f4-9aaf-e41c54d24b17" 00:11:27.261 ], 00:11:27.261 "product_name": "Raid Volume", 00:11:27.261 "block_size": 512, 00:11:27.261 "num_blocks": 196608, 00:11:27.261 "uuid": "3b291b71-14d8-41f4-9aaf-e41c54d24b17", 00:11:27.261 "assigned_rate_limits": { 00:11:27.261 "rw_ios_per_sec": 0, 00:11:27.261 "rw_mbytes_per_sec": 0, 00:11:27.261 "r_mbytes_per_sec": 0, 00:11:27.261 "w_mbytes_per_sec": 0 00:11:27.261 }, 00:11:27.261 "claimed": false, 00:11:27.261 "zoned": false, 00:11:27.261 "supported_io_types": { 00:11:27.261 "read": true, 00:11:27.261 "write": true, 00:11:27.261 "unmap": true, 00:11:27.261 "flush": true, 00:11:27.261 "reset": true, 00:11:27.261 "nvme_admin": false, 00:11:27.261 "nvme_io": false, 00:11:27.261 "nvme_io_md": false, 00:11:27.261 "write_zeroes": true, 00:11:27.262 "zcopy": false, 00:11:27.262 "get_zone_info": false, 00:11:27.262 "zone_management": false, 00:11:27.262 "zone_append": false, 00:11:27.262 "compare": false, 00:11:27.262 "compare_and_write": false, 00:11:27.262 "abort": false, 00:11:27.262 "seek_hole": false, 00:11:27.262 "seek_data": false, 00:11:27.262 "copy": false, 00:11:27.262 "nvme_iov_md": false 00:11:27.262 }, 00:11:27.262 "memory_domains": [ 00:11:27.262 { 00:11:27.262 "dma_device_id": "system", 00:11:27.262 "dma_device_type": 1 00:11:27.262 }, 00:11:27.262 { 00:11:27.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.262 "dma_device_type": 2 00:11:27.262 }, 00:11:27.262 { 00:11:27.262 "dma_device_id": "system", 00:11:27.262 "dma_device_type": 1 00:11:27.262 }, 00:11:27.262 { 00:11:27.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.262 
"dma_device_type": 2 00:11:27.262 }, 00:11:27.262 { 00:11:27.262 "dma_device_id": "system", 00:11:27.262 "dma_device_type": 1 00:11:27.262 }, 00:11:27.262 { 00:11:27.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.262 "dma_device_type": 2 00:11:27.262 } 00:11:27.262 ], 00:11:27.262 "driver_specific": { 00:11:27.262 "raid": { 00:11:27.262 "uuid": "3b291b71-14d8-41f4-9aaf-e41c54d24b17", 00:11:27.262 "strip_size_kb": 64, 00:11:27.262 "state": "online", 00:11:27.262 "raid_level": "raid0", 00:11:27.262 "superblock": false, 00:11:27.262 "num_base_bdevs": 3, 00:11:27.262 "num_base_bdevs_discovered": 3, 00:11:27.262 "num_base_bdevs_operational": 3, 00:11:27.262 "base_bdevs_list": [ 00:11:27.262 { 00:11:27.262 "name": "BaseBdev1", 00:11:27.262 "uuid": "9fa3488b-df8d-4512-a653-c170644d17db", 00:11:27.262 "is_configured": true, 00:11:27.262 "data_offset": 0, 00:11:27.262 "data_size": 65536 00:11:27.262 }, 00:11:27.262 { 00:11:27.262 "name": "BaseBdev2", 00:11:27.262 "uuid": "6ad584de-ea79-46a0-8249-85d5f7ac016f", 00:11:27.262 "is_configured": true, 00:11:27.262 "data_offset": 0, 00:11:27.262 "data_size": 65536 00:11:27.262 }, 00:11:27.262 { 00:11:27.262 "name": "BaseBdev3", 00:11:27.262 "uuid": "a320813a-4b6d-4beb-96a9-7268940aa920", 00:11:27.262 "is_configured": true, 00:11:27.262 "data_offset": 0, 00:11:27.262 "data_size": 65536 00:11:27.262 } 00:11:27.262 ] 00:11:27.262 } 00:11:27.262 } 00:11:27.262 }' 00:11:27.262 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.262 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:27.262 BaseBdev2 00:11:27.262 BaseBdev3' 00:11:27.262 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:27.262 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:27.262 10:18:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:27.520 "name": "BaseBdev1", 00:11:27.520 "aliases": [ 00:11:27.520 "9fa3488b-df8d-4512-a653-c170644d17db" 00:11:27.520 ], 00:11:27.520 "product_name": "Malloc disk", 00:11:27.520 "block_size": 512, 00:11:27.520 "num_blocks": 65536, 00:11:27.520 "uuid": "9fa3488b-df8d-4512-a653-c170644d17db", 00:11:27.520 "assigned_rate_limits": { 00:11:27.520 "rw_ios_per_sec": 0, 00:11:27.520 "rw_mbytes_per_sec": 0, 00:11:27.520 "r_mbytes_per_sec": 0, 00:11:27.520 "w_mbytes_per_sec": 0 00:11:27.520 }, 00:11:27.520 "claimed": true, 00:11:27.520 "claim_type": "exclusive_write", 00:11:27.520 "zoned": false, 00:11:27.520 "supported_io_types": { 00:11:27.520 "read": true, 00:11:27.520 "write": true, 00:11:27.520 "unmap": true, 00:11:27.520 "flush": true, 00:11:27.520 "reset": true, 00:11:27.520 "nvme_admin": false, 00:11:27.520 "nvme_io": false, 00:11:27.520 "nvme_io_md": false, 00:11:27.520 "write_zeroes": true, 00:11:27.520 "zcopy": true, 00:11:27.520 "get_zone_info": false, 00:11:27.520 "zone_management": false, 00:11:27.520 "zone_append": false, 00:11:27.520 "compare": false, 00:11:27.520 "compare_and_write": false, 00:11:27.520 "abort": true, 00:11:27.520 "seek_hole": false, 00:11:27.520 "seek_data": false, 00:11:27.520 "copy": true, 00:11:27.520 "nvme_iov_md": 
false 00:11:27.520 }, 00:11:27.520 "memory_domains": [ 00:11:27.520 { 00:11:27.520 "dma_device_id": "system", 00:11:27.520 "dma_device_type": 1 00:11:27.520 }, 00:11:27.520 { 00:11:27.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.520 "dma_device_type": 2 00:11:27.520 } 00:11:27.520 ], 00:11:27.520 "driver_specific": {} 00:11:27.520 }' 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:27.520 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:27.778 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:27.778 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:27.778 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:27.778 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:27.778 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:27.778 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:27.778 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:27.778 "name": "BaseBdev2", 00:11:27.778 "aliases": [ 00:11:27.778 "6ad584de-ea79-46a0-8249-85d5f7ac016f" 00:11:27.778 ], 00:11:27.778 "product_name": "Malloc disk", 00:11:27.778 "block_size": 512, 00:11:27.778 "num_blocks": 65536, 00:11:27.778 "uuid": "6ad584de-ea79-46a0-8249-85d5f7ac016f", 00:11:27.778 "assigned_rate_limits": { 00:11:27.778 "rw_ios_per_sec": 0, 00:11:27.778 "rw_mbytes_per_sec": 0, 00:11:27.778 "r_mbytes_per_sec": 0, 00:11:27.778 "w_mbytes_per_sec": 0 00:11:27.778 }, 00:11:27.778 "claimed": true, 00:11:27.778 "claim_type": "exclusive_write", 00:11:27.778 "zoned": false, 00:11:27.778 "supported_io_types": { 00:11:27.778 "read": true, 00:11:27.778 "write": true, 00:11:27.778 "unmap": true, 00:11:27.778 "flush": true, 00:11:27.778 "reset": true, 00:11:27.778 "nvme_admin": false, 00:11:27.778 "nvme_io": false, 00:11:27.778 "nvme_io_md": false, 00:11:27.778 "write_zeroes": true, 00:11:27.778 "zcopy": true, 00:11:27.778 "get_zone_info": false, 00:11:27.778 "zone_management": false, 00:11:27.778 "zone_append": false, 00:11:27.778 "compare": false, 00:11:27.778 "compare_and_write": false, 00:11:27.778 "abort": true, 00:11:27.778 "seek_hole": false, 00:11:27.778 "seek_data": false, 00:11:27.778 "copy": true, 00:11:27.778 "nvme_iov_md": false 00:11:27.778 }, 00:11:27.778 "memory_domains": [ 00:11:27.778 { 00:11:27.778 "dma_device_id": "system", 00:11:27.778 "dma_device_type": 1 00:11:27.778 }, 00:11:27.778 { 
00:11:27.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.778 "dma_device_type": 2 00:11:27.778 } 00:11:27.778 ], 00:11:27.778 "driver_specific": {} 00:11:27.778 }' 00:11:27.778 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.036 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.294 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.294 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.294 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:28.294 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:28.294 10:18:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:28.294 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:28.294 "name": "BaseBdev3", 00:11:28.294 "aliases": [ 00:11:28.294 "a320813a-4b6d-4beb-96a9-7268940aa920" 00:11:28.294 ], 00:11:28.294 "product_name": "Malloc disk", 00:11:28.294 "block_size": 512, 00:11:28.294 "num_blocks": 65536, 00:11:28.294 "uuid": "a320813a-4b6d-4beb-96a9-7268940aa920", 00:11:28.294 "assigned_rate_limits": { 00:11:28.294 "rw_ios_per_sec": 0, 00:11:28.294 "rw_mbytes_per_sec": 0, 00:11:28.294 "r_mbytes_per_sec": 0, 00:11:28.294 "w_mbytes_per_sec": 0 00:11:28.294 }, 00:11:28.294 "claimed": true, 00:11:28.294 "claim_type": "exclusive_write", 00:11:28.294 "zoned": false, 00:11:28.294 "supported_io_types": { 00:11:28.294 "read": true, 00:11:28.294 "write": true, 00:11:28.294 "unmap": true, 00:11:28.294 "flush": true, 00:11:28.294 "reset": true, 00:11:28.294 "nvme_admin": false, 00:11:28.294 "nvme_io": false, 00:11:28.294 "nvme_io_md": false, 00:11:28.294 "write_zeroes": true, 00:11:28.294 "zcopy": true, 00:11:28.294 "get_zone_info": false, 00:11:28.294 "zone_management": false, 00:11:28.294 "zone_append": false, 00:11:28.294 "compare": false, 00:11:28.294 "compare_and_write": false, 00:11:28.294 "abort": true, 00:11:28.294 "seek_hole": false, 00:11:28.294 "seek_data": false, 00:11:28.294 "copy": true, 00:11:28.294 "nvme_iov_md": false 00:11:28.294 }, 00:11:28.294 "memory_domains": [ 00:11:28.294 { 00:11:28.294 "dma_device_id": "system", 00:11:28.294 "dma_device_type": 1 00:11:28.294 }, 00:11:28.294 { 00:11:28.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.294 "dma_device_type": 2 00:11:28.294 } 00:11:28.294 ], 00:11:28.294 "driver_specific": {} 00:11:28.294 }' 
00:11:28.294 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.552 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:28.808 [2024-07-15 10:18:53.462682] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.808 [2024-07-15 10:18:53.462706] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.808 [2024-07-15 10:18:53.462734] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.808 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.066 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.066 "name": "Existed_Raid", 00:11:29.066 "uuid": "3b291b71-14d8-41f4-9aaf-e41c54d24b17", 00:11:29.066 "strip_size_kb": 64, 00:11:29.066 "state": "offline", 00:11:29.066 "raid_level": "raid0", 00:11:29.066 "superblock": false, 00:11:29.066 "num_base_bdevs": 3, 00:11:29.066 "num_base_bdevs_discovered": 2, 00:11:29.066 "num_base_bdevs_operational": 2, 00:11:29.066 "base_bdevs_list": [ 00:11:29.066 { 00:11:29.066 "name": null, 00:11:29.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.066 "is_configured": false, 00:11:29.066 "data_offset": 0, 00:11:29.066 "data_size": 65536 00:11:29.066 }, 00:11:29.066 { 00:11:29.066 "name": "BaseBdev2", 00:11:29.066 "uuid": "6ad584de-ea79-46a0-8249-85d5f7ac016f", 00:11:29.066 "is_configured": true, 00:11:29.066 "data_offset": 0, 00:11:29.066 "data_size": 65536 00:11:29.066 }, 00:11:29.066 { 00:11:29.066 "name": "BaseBdev3", 00:11:29.066 "uuid": "a320813a-4b6d-4beb-96a9-7268940aa920", 00:11:29.066 "is_configured": true, 00:11:29.066 "data_offset": 0, 00:11:29.066 "data_size": 65536 00:11:29.066 } 00:11:29.066 ] 00:11:29.066 }' 00:11:29.066 10:18:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.066 10:18:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.630 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:29.630 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:29.630 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:29.630 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.630 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:29.630 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.630 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:29.898 [2024-07-15 10:18:54.430052] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.898 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:29.898 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:29.898 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.898 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:29.898 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:29.898 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.898 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:30.183 [2024-07-15 10:18:54.792459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:30.183 [2024-07-15 10:18:54.792492] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d1f700 name Existed_Raid, state offline 00:11:30.183 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:30.183 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:30.183 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.183 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:30.441 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:30.441 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:30.441 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:30.441 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:30.441 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:30.441 10:18:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:30.441 BaseBdev2 00:11:30.441 10:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:30.441 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:30.441 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:30.441 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:30.441 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:30.441 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:30.441 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:30.699 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.699 [ 00:11:30.699 { 00:11:30.699 "name": "BaseBdev2", 00:11:30.699 "aliases": [ 00:11:30.699 "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79" 00:11:30.699 ], 00:11:30.699 "product_name": "Malloc disk", 00:11:30.699 "block_size": 512, 00:11:30.699 "num_blocks": 65536, 00:11:30.699 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:30.699 "assigned_rate_limits": { 00:11:30.699 "rw_ios_per_sec": 0, 00:11:30.699 "rw_mbytes_per_sec": 0, 00:11:30.699 "r_mbytes_per_sec": 0, 00:11:30.699 "w_mbytes_per_sec": 0 00:11:30.699 }, 00:11:30.699 "claimed": false, 00:11:30.699 "zoned": false, 00:11:30.699 "supported_io_types": { 00:11:30.699 "read": true, 00:11:30.699 "write": true, 00:11:30.699 "unmap": true, 00:11:30.699 "flush": true, 00:11:30.699 "reset": true, 00:11:30.699 "nvme_admin": false, 
00:11:30.699 "nvme_io": false, 00:11:30.699 "nvme_io_md": false, 00:11:30.699 "write_zeroes": true, 00:11:30.699 "zcopy": true, 00:11:30.699 "get_zone_info": false, 00:11:30.699 "zone_management": false, 00:11:30.699 "zone_append": false, 00:11:30.699 "compare": false, 00:11:30.699 "compare_and_write": false, 00:11:30.699 "abort": true, 00:11:30.699 "seek_hole": false, 00:11:30.699 "seek_data": false, 00:11:30.699 "copy": true, 00:11:30.699 "nvme_iov_md": false 00:11:30.699 }, 00:11:30.699 "memory_domains": [ 00:11:30.699 { 00:11:30.699 "dma_device_id": "system", 00:11:30.699 "dma_device_type": 1 00:11:30.699 }, 00:11:30.699 { 00:11:30.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.699 "dma_device_type": 2 00:11:30.699 } 00:11:30.699 ], 00:11:30.699 "driver_specific": {} 00:11:30.699 } 00:11:30.699 ] 00:11:30.699 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:30.699 10:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:30.699 10:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:30.699 10:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:30.980 BaseBdev3 00:11:30.980 10:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:30.980 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:30.980 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:30.980 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:30.980 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:30.980 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:30.980 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:31.239 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.239 [ 00:11:31.239 { 00:11:31.239 "name": "BaseBdev3", 00:11:31.239 "aliases": [ 00:11:31.239 "96e730b9-d702-4a8a-8d72-01927961472c" 00:11:31.239 ], 00:11:31.239 "product_name": "Malloc disk", 00:11:31.239 "block_size": 512, 00:11:31.239 "num_blocks": 65536, 00:11:31.239 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:31.239 "assigned_rate_limits": { 00:11:31.239 "rw_ios_per_sec": 0, 00:11:31.239 "rw_mbytes_per_sec": 0, 00:11:31.239 "r_mbytes_per_sec": 0, 00:11:31.239 "w_mbytes_per_sec": 0 00:11:31.239 }, 00:11:31.239 "claimed": false, 00:11:31.239 "zoned": false, 00:11:31.239 "supported_io_types": { 00:11:31.239 "read": true, 00:11:31.239 "write": true, 00:11:31.239 "unmap": true, 00:11:31.239 "flush": true, 00:11:31.239 "reset": true, 00:11:31.239 "nvme_admin": false, 00:11:31.239 "nvme_io": false, 00:11:31.239 "nvme_io_md": false, 00:11:31.239 "write_zeroes": true, 00:11:31.239 "zcopy": true, 00:11:31.239 "get_zone_info": false, 00:11:31.239 "zone_management": false, 00:11:31.239 "zone_append": false, 00:11:31.239 "compare": false, 00:11:31.239 
"compare_and_write": false, 00:11:31.239 "abort": true, 00:11:31.239 "seek_hole": false, 00:11:31.239 "seek_data": false, 00:11:31.239 "copy": true, 00:11:31.239 "nvme_iov_md": false 00:11:31.239 }, 00:11:31.239 "memory_domains": [ 00:11:31.239 { 00:11:31.239 "dma_device_id": "system", 00:11:31.239 "dma_device_type": 1 00:11:31.239 }, 00:11:31.239 { 00:11:31.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.239 "dma_device_type": 2 00:11:31.239 } 00:11:31.239 ], 00:11:31.239 "driver_specific": {} 00:11:31.239 } 00:11:31.239 ] 00:11:31.239 10:18:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:31.239 10:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:31.239 10:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:31.239 10:18:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:31.497 [2024-07-15 10:18:56.149298] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.497 [2024-07-15 10:18:56.149333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.497 [2024-07-15 10:18:56.149344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.497 [2024-07-15 10:18:56.150252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.497 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.755 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.755 "name": "Existed_Raid", 00:11:31.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.755 "strip_size_kb": 64, 00:11:31.755 "state": "configuring", 00:11:31.755 "raid_level": "raid0", 00:11:31.755 "superblock": false, 00:11:31.755 "num_base_bdevs": 3, 00:11:31.755 "num_base_bdevs_discovered": 2, 00:11:31.755 
"num_base_bdevs_operational": 3, 00:11:31.755 "base_bdevs_list": [ 00:11:31.755 { 00:11:31.755 "name": "BaseBdev1", 00:11:31.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.755 "is_configured": false, 00:11:31.755 "data_offset": 0, 00:11:31.755 "data_size": 0 00:11:31.755 }, 00:11:31.755 { 00:11:31.755 "name": "BaseBdev2", 00:11:31.755 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:31.755 "is_configured": true, 00:11:31.755 "data_offset": 0, 00:11:31.755 "data_size": 65536 00:11:31.755 }, 00:11:31.755 { 00:11:31.755 "name": "BaseBdev3", 00:11:31.755 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:31.755 "is_configured": true, 00:11:31.755 "data_offset": 0, 00:11:31.755 "data_size": 65536 00:11:31.755 } 00:11:31.755 ] 00:11:31.755 }' 00:11:31.755 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.755 10:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:32.334 [2024-07-15 10:18:56.979416] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.334 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.592 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.592 "name": "Existed_Raid", 00:11:32.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.592 "strip_size_kb": 64, 00:11:32.592 "state": "configuring", 00:11:32.592 "raid_level": "raid0", 00:11:32.592 "superblock": false, 00:11:32.592 "num_base_bdevs": 3, 00:11:32.592 "num_base_bdevs_discovered": 1, 00:11:32.592 "num_base_bdevs_operational": 3, 00:11:32.592 "base_bdevs_list": [ 00:11:32.592 { 00:11:32.592 "name": "BaseBdev1", 00:11:32.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.592 "is_configured": false, 00:11:32.592 "data_offset": 0, 00:11:32.592 "data_size": 0 00:11:32.592 }, 00:11:32.592 { 00:11:32.592 "name": null, 
00:11:32.592 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:32.592 "is_configured": false, 00:11:32.592 "data_offset": 0, 00:11:32.592 "data_size": 65536 00:11:32.592 }, 00:11:32.592 { 00:11:32.592 "name": "BaseBdev3", 00:11:32.592 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:32.592 "is_configured": true, 00:11:32.592 "data_offset": 0, 00:11:32.592 "data_size": 65536 00:11:32.592 } 00:11:32.592 ] 00:11:32.592 }' 00:11:32.592 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.592 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.850 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.850 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.108 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:33.108 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.366 [2024-07-15 10:18:57.944823] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.366 BaseBdev1 00:11:33.366 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:33.366 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:33.366 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:33.366 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:33.366 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:33.366 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:33.366 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:33.366 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.625 [ 00:11:33.625 { 00:11:33.625 "name": "BaseBdev1", 00:11:33.625 "aliases": [ 00:11:33.625 "7160d54d-0f32-48e9-955e-1bc997e58274" 00:11:33.625 ], 00:11:33.625 "product_name": "Malloc disk", 00:11:33.625 "block_size": 512, 00:11:33.625 "num_blocks": 65536, 00:11:33.625 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:33.625 "assigned_rate_limits": { 00:11:33.625 "rw_ios_per_sec": 0, 00:11:33.625 "rw_mbytes_per_sec": 0, 00:11:33.625 "r_mbytes_per_sec": 0, 00:11:33.625 "w_mbytes_per_sec": 0 00:11:33.625 }, 00:11:33.625 "claimed": true, 00:11:33.625 "claim_type": "exclusive_write", 00:11:33.625 "zoned": false, 00:11:33.625 "supported_io_types": { 00:11:33.625 "read": true, 00:11:33.625 "write": true, 00:11:33.625 "unmap": true, 00:11:33.625 "flush": true, 00:11:33.625 "reset": true, 00:11:33.625 "nvme_admin": false, 00:11:33.625 "nvme_io": false, 00:11:33.625 "nvme_io_md": false, 00:11:33.625 "write_zeroes": true, 00:11:33.625 "zcopy": true, 00:11:33.625 "get_zone_info": false, 
00:11:33.625 "zone_management": false, 00:11:33.625 "zone_append": false, 00:11:33.625 "compare": false, 00:11:33.625 "compare_and_write": false, 00:11:33.625 "abort": true, 00:11:33.625 "seek_hole": false, 00:11:33.625 "seek_data": false, 00:11:33.625 "copy": true, 00:11:33.625 "nvme_iov_md": false 00:11:33.625 }, 00:11:33.625 "memory_domains": [ 00:11:33.625 { 00:11:33.625 "dma_device_id": "system", 00:11:33.625 "dma_device_type": 1 00:11:33.625 }, 00:11:33.625 { 00:11:33.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.625 "dma_device_type": 2 00:11:33.625 } 00:11:33.625 ], 00:11:33.625 "driver_specific": {} 00:11:33.625 } 00:11:33.625 ] 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.625 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.884 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:33.884 "name": "Existed_Raid", 00:11:33.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.884 "strip_size_kb": 64, 00:11:33.884 "state": "configuring", 00:11:33.884 "raid_level": "raid0", 00:11:33.884 "superblock": false, 00:11:33.884 "num_base_bdevs": 3, 00:11:33.884 "num_base_bdevs_discovered": 2, 00:11:33.884 "num_base_bdevs_operational": 3, 00:11:33.884 "base_bdevs_list": [ 00:11:33.884 { 00:11:33.884 "name": "BaseBdev1", 00:11:33.884 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:33.884 "is_configured": true, 00:11:33.884 "data_offset": 0, 00:11:33.884 "data_size": 65536 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "name": null, 00:11:33.884 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:33.884 "is_configured": false, 00:11:33.884 "data_offset": 0, 00:11:33.884 "data_size": 65536 00:11:33.884 }, 00:11:33.884 { 00:11:33.884 "name": "BaseBdev3", 00:11:33.884 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:33.884 "is_configured": true, 00:11:33.884 "data_offset": 0, 00:11:33.884 "data_size": 65536 00:11:33.884 } 00:11:33.884 ] 00:11:33.884 }' 00:11:33.884 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:11:33.884 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.142 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.142 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:34.401 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:34.401 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:34.659 [2024-07-15 10:18:59.220126] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.659 "name": "Existed_Raid", 00:11:34.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.659 "strip_size_kb": 64, 00:11:34.659 "state": "configuring", 00:11:34.659 "raid_level": "raid0", 00:11:34.659 "superblock": false, 00:11:34.659 "num_base_bdevs": 3, 00:11:34.659 "num_base_bdevs_discovered": 1, 00:11:34.659 "num_base_bdevs_operational": 3, 00:11:34.659 "base_bdevs_list": [ 00:11:34.659 { 00:11:34.659 "name": "BaseBdev1", 00:11:34.659 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:34.659 "is_configured": true, 00:11:34.659 "data_offset": 0, 00:11:34.659 "data_size": 65536 00:11:34.659 }, 00:11:34.659 { 00:11:34.659 "name": null, 00:11:34.659 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:34.659 "is_configured": false, 00:11:34.659 "data_offset": 0, 00:11:34.659 "data_size": 65536 00:11:34.659 }, 00:11:34.659 { 00:11:34.659 "name": null, 00:11:34.659 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:34.659 "is_configured": false, 00:11:34.659 "data_offset": 0, 00:11:34.659 "data_size": 65536 00:11:34.659 } 00:11:34.659 ] 00:11:34.659 }' 
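The remove and re-add steps around this point reduce to two RPCs plus a jq check on the affected slot's is_configured flag: removing a base bdev empties its slot and leaves the array in the configuring state, bdev_raid_add_base_bdev attaches it again, and the array only transitions to online once all three slots are configured. A condensed sketch using the same RPCs and jq filters as the log:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # drop the third member; the slot empties and the array stays 'configuring'
    $rpc bdev_raid_remove_base_bdev BaseBdev3
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect: false
    # re-attach the same bdev; the slot is configured again
    $rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    $rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect: true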
00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.659 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.225 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:35.225 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:35.483 [2024-07-15 10:19:00.166582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.483 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.742 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:35.742 "name": "Existed_Raid", 00:11:35.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.742 "strip_size_kb": 64, 00:11:35.742 "state": "configuring", 00:11:35.742 "raid_level": "raid0", 00:11:35.742 "superblock": false, 00:11:35.742 "num_base_bdevs": 3, 00:11:35.742 "num_base_bdevs_discovered": 2, 00:11:35.742 "num_base_bdevs_operational": 3, 00:11:35.742 "base_bdevs_list": [ 00:11:35.742 { 00:11:35.742 "name": "BaseBdev1", 00:11:35.742 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:35.742 "is_configured": true, 00:11:35.742 "data_offset": 0, 00:11:35.742 "data_size": 65536 00:11:35.742 }, 00:11:35.742 { 00:11:35.742 "name": null, 00:11:35.742 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:35.742 "is_configured": false, 00:11:35.742 "data_offset": 0, 00:11:35.742 "data_size": 65536 00:11:35.742 }, 00:11:35.742 { 00:11:35.742 "name": "BaseBdev3", 00:11:35.742 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 
00:11:35.742 "is_configured": true, 00:11:35.742 "data_offset": 0, 00:11:35.742 "data_size": 65536 00:11:35.742 } 00:11:35.742 ] 00:11:35.742 }' 00:11:35.742 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:35.742 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.307 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.307 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:36.307 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:36.307 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:36.566 [2024-07-15 10:19:01.169166] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.566 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.824 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.824 "name": "Existed_Raid", 00:11:36.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.824 "strip_size_kb": 64, 00:11:36.824 "state": "configuring", 00:11:36.824 "raid_level": "raid0", 00:11:36.824 "superblock": false, 00:11:36.824 "num_base_bdevs": 3, 00:11:36.824 "num_base_bdevs_discovered": 1, 00:11:36.824 "num_base_bdevs_operational": 3, 00:11:36.824 "base_bdevs_list": [ 00:11:36.824 { 00:11:36.824 "name": null, 00:11:36.824 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:36.824 "is_configured": false, 00:11:36.824 "data_offset": 0, 00:11:36.824 "data_size": 65536 00:11:36.824 }, 00:11:36.824 { 00:11:36.824 "name": null, 00:11:36.824 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:36.824 "is_configured": false, 00:11:36.824 "data_offset": 0, 00:11:36.824 "data_size": 65536 00:11:36.824 }, 00:11:36.824 { 
00:11:36.824 "name": "BaseBdev3", 00:11:36.824 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:36.824 "is_configured": true, 00:11:36.824 "data_offset": 0, 00:11:36.824 "data_size": 65536 00:11:36.824 } 00:11:36.824 ] 00:11:36.824 }' 00:11:36.824 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.824 10:19:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.082 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.082 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:37.340 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:37.340 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:37.598 [2024-07-15 10:19:02.157432] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.598 "name": "Existed_Raid", 00:11:37.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.598 "strip_size_kb": 64, 00:11:37.598 "state": "configuring", 00:11:37.598 "raid_level": "raid0", 00:11:37.598 "superblock": false, 00:11:37.598 "num_base_bdevs": 3, 00:11:37.598 "num_base_bdevs_discovered": 2, 00:11:37.598 "num_base_bdevs_operational": 3, 00:11:37.598 "base_bdevs_list": [ 00:11:37.598 { 00:11:37.598 "name": null, 00:11:37.598 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:37.598 "is_configured": false, 00:11:37.598 "data_offset": 0, 00:11:37.598 "data_size": 65536 00:11:37.598 }, 00:11:37.598 { 00:11:37.598 "name": "BaseBdev2", 00:11:37.598 "uuid": 
"5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:37.598 "is_configured": true, 00:11:37.598 "data_offset": 0, 00:11:37.598 "data_size": 65536 00:11:37.598 }, 00:11:37.598 { 00:11:37.598 "name": "BaseBdev3", 00:11:37.598 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:37.598 "is_configured": true, 00:11:37.598 "data_offset": 0, 00:11:37.598 "data_size": 65536 00:11:37.598 } 00:11:37.598 ] 00:11:37.598 }' 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.598 10:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.162 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.162 10:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.420 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:38.420 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.420 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:38.420 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7160d54d-0f32-48e9-955e-1bc997e58274 00:11:38.677 [2024-07-15 10:19:03.359239] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:38.677 [2024-07-15 10:19:03.359268] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d20a60 00:11:38.677 [2024-07-15 10:19:03.359273] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:38.677 [2024-07-15 10:19:03.359408] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ec71a0 00:11:38.677 [2024-07-15 10:19:03.359486] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d20a60 00:11:38.677 [2024-07-15 10:19:03.359493] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1d20a60 00:11:38.677 [2024-07-15 10:19:03.359606] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.677 NewBaseBdev 00:11:38.677 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:38.677 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:38.677 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:38.677 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:38.677 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:38.677 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:38.677 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.935 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:38.935 [ 00:11:38.935 { 00:11:38.935 "name": "NewBaseBdev", 00:11:38.935 "aliases": [ 00:11:38.935 "7160d54d-0f32-48e9-955e-1bc997e58274" 00:11:38.935 ], 00:11:38.935 "product_name": "Malloc disk", 00:11:38.935 "block_size": 512, 00:11:38.935 "num_blocks": 65536, 00:11:38.935 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:38.935 "assigned_rate_limits": { 00:11:38.935 "rw_ios_per_sec": 0, 00:11:38.935 "rw_mbytes_per_sec": 0, 00:11:38.935 "r_mbytes_per_sec": 0, 00:11:38.935 "w_mbytes_per_sec": 0 00:11:38.935 }, 00:11:38.935 "claimed": true, 00:11:38.935 "claim_type": "exclusive_write", 00:11:38.935 "zoned": false, 00:11:38.935 "supported_io_types": { 00:11:38.935 "read": true, 00:11:38.935 "write": true, 00:11:38.935 "unmap": true, 00:11:38.935 "flush": true, 00:11:38.935 "reset": true, 00:11:38.935 "nvme_admin": false, 00:11:38.935 "nvme_io": false, 00:11:38.935 "nvme_io_md": false, 00:11:38.935 "write_zeroes": true, 00:11:38.935 "zcopy": true, 00:11:38.935 "get_zone_info": false, 00:11:38.935 "zone_management": false, 00:11:38.935 "zone_append": false, 00:11:38.935 "compare": false, 00:11:38.935 "compare_and_write": false, 00:11:38.935 "abort": true, 00:11:38.935 "seek_hole": false, 00:11:38.935 "seek_data": false, 00:11:38.935 "copy": true, 00:11:38.935 "nvme_iov_md": false 00:11:38.935 }, 00:11:38.935 "memory_domains": [ 00:11:38.935 { 00:11:38.935 "dma_device_id": "system", 00:11:38.935 "dma_device_type": 1 00:11:38.935 }, 00:11:38.935 { 00:11:38.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.936 "dma_device_type": 2 00:11:38.936 } 00:11:38.936 ], 00:11:38.936 "driver_specific": {} 00:11:38.936 } 00:11:38.936 ] 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.936 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.194 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.194 "name": "Existed_Raid", 00:11:39.194 "uuid": 
"6a982f6f-0b45-44ca-81cb-572a41c69a60", 00:11:39.194 "strip_size_kb": 64, 00:11:39.194 "state": "online", 00:11:39.194 "raid_level": "raid0", 00:11:39.194 "superblock": false, 00:11:39.194 "num_base_bdevs": 3, 00:11:39.194 "num_base_bdevs_discovered": 3, 00:11:39.194 "num_base_bdevs_operational": 3, 00:11:39.194 "base_bdevs_list": [ 00:11:39.194 { 00:11:39.194 "name": "NewBaseBdev", 00:11:39.194 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:39.194 "is_configured": true, 00:11:39.194 "data_offset": 0, 00:11:39.194 "data_size": 65536 00:11:39.194 }, 00:11:39.194 { 00:11:39.194 "name": "BaseBdev2", 00:11:39.194 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:39.194 "is_configured": true, 00:11:39.194 "data_offset": 0, 00:11:39.194 "data_size": 65536 00:11:39.194 }, 00:11:39.194 { 00:11:39.194 "name": "BaseBdev3", 00:11:39.194 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:39.194 "is_configured": true, 00:11:39.194 "data_offset": 0, 00:11:39.194 "data_size": 65536 00:11:39.194 } 00:11:39.194 ] 00:11:39.194 }' 00:11:39.194 10:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.194 10:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:39.760 [2024-07-15 10:19:04.514428] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:39.760 "name": "Existed_Raid", 00:11:39.760 "aliases": [ 00:11:39.760 "6a982f6f-0b45-44ca-81cb-572a41c69a60" 00:11:39.760 ], 00:11:39.760 "product_name": "Raid Volume", 00:11:39.760 "block_size": 512, 00:11:39.760 "num_blocks": 196608, 00:11:39.760 "uuid": "6a982f6f-0b45-44ca-81cb-572a41c69a60", 00:11:39.760 "assigned_rate_limits": { 00:11:39.760 "rw_ios_per_sec": 0, 00:11:39.760 "rw_mbytes_per_sec": 0, 00:11:39.760 "r_mbytes_per_sec": 0, 00:11:39.760 "w_mbytes_per_sec": 0 00:11:39.760 }, 00:11:39.760 "claimed": false, 00:11:39.760 "zoned": false, 00:11:39.760 "supported_io_types": { 00:11:39.760 "read": true, 00:11:39.760 "write": true, 00:11:39.760 "unmap": true, 00:11:39.760 "flush": true, 00:11:39.760 "reset": true, 00:11:39.760 "nvme_admin": false, 00:11:39.760 "nvme_io": false, 00:11:39.760 "nvme_io_md": false, 00:11:39.760 "write_zeroes": true, 00:11:39.760 "zcopy": false, 00:11:39.760 "get_zone_info": false, 00:11:39.760 "zone_management": false, 00:11:39.760 "zone_append": false, 00:11:39.760 "compare": false, 00:11:39.760 "compare_and_write": false, 00:11:39.760 "abort": 
false, 00:11:39.760 "seek_hole": false, 00:11:39.760 "seek_data": false, 00:11:39.760 "copy": false, 00:11:39.760 "nvme_iov_md": false 00:11:39.760 }, 00:11:39.760 "memory_domains": [ 00:11:39.760 { 00:11:39.760 "dma_device_id": "system", 00:11:39.760 "dma_device_type": 1 00:11:39.760 }, 00:11:39.760 { 00:11:39.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.760 "dma_device_type": 2 00:11:39.760 }, 00:11:39.760 { 00:11:39.760 "dma_device_id": "system", 00:11:39.760 "dma_device_type": 1 00:11:39.760 }, 00:11:39.760 { 00:11:39.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.760 "dma_device_type": 2 00:11:39.760 }, 00:11:39.760 { 00:11:39.760 "dma_device_id": "system", 00:11:39.760 "dma_device_type": 1 00:11:39.760 }, 00:11:39.760 { 00:11:39.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.760 "dma_device_type": 2 00:11:39.760 } 00:11:39.760 ], 00:11:39.760 "driver_specific": { 00:11:39.760 "raid": { 00:11:39.760 "uuid": "6a982f6f-0b45-44ca-81cb-572a41c69a60", 00:11:39.760 "strip_size_kb": 64, 00:11:39.760 "state": "online", 00:11:39.760 "raid_level": "raid0", 00:11:39.760 "superblock": false, 00:11:39.760 "num_base_bdevs": 3, 00:11:39.760 "num_base_bdevs_discovered": 3, 00:11:39.760 "num_base_bdevs_operational": 3, 00:11:39.760 "base_bdevs_list": [ 00:11:39.760 { 00:11:39.760 "name": "NewBaseBdev", 00:11:39.760 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:39.760 "is_configured": true, 00:11:39.760 "data_offset": 0, 00:11:39.760 "data_size": 65536 00:11:39.760 }, 00:11:39.760 { 00:11:39.760 "name": "BaseBdev2", 00:11:39.760 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:39.760 "is_configured": true, 00:11:39.760 "data_offset": 0, 00:11:39.760 "data_size": 65536 00:11:39.760 }, 00:11:39.760 { 00:11:39.760 "name": "BaseBdev3", 00:11:39.760 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:39.760 "is_configured": true, 00:11:39.760 "data_offset": 0, 00:11:39.760 "data_size": 65536 00:11:39.760 } 00:11:39.760 ] 00:11:39.760 } 00:11:39.760 } 00:11:39.760 }' 00:11:39.760 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.018 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:40.018 BaseBdev2 00:11:40.018 BaseBdev3' 00:11:40.018 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:40.018 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:40.018 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:40.018 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:40.018 "name": "NewBaseBdev", 00:11:40.018 "aliases": [ 00:11:40.018 "7160d54d-0f32-48e9-955e-1bc997e58274" 00:11:40.018 ], 00:11:40.018 "product_name": "Malloc disk", 00:11:40.018 "block_size": 512, 00:11:40.018 "num_blocks": 65536, 00:11:40.018 "uuid": "7160d54d-0f32-48e9-955e-1bc997e58274", 00:11:40.018 "assigned_rate_limits": { 00:11:40.018 "rw_ios_per_sec": 0, 00:11:40.018 "rw_mbytes_per_sec": 0, 00:11:40.018 "r_mbytes_per_sec": 0, 00:11:40.018 "w_mbytes_per_sec": 0 00:11:40.018 }, 00:11:40.018 "claimed": true, 00:11:40.018 "claim_type": "exclusive_write", 00:11:40.018 "zoned": false, 00:11:40.018 "supported_io_types": { 00:11:40.018 "read": true, 
00:11:40.018 "write": true, 00:11:40.018 "unmap": true, 00:11:40.018 "flush": true, 00:11:40.018 "reset": true, 00:11:40.018 "nvme_admin": false, 00:11:40.018 "nvme_io": false, 00:11:40.018 "nvme_io_md": false, 00:11:40.018 "write_zeroes": true, 00:11:40.018 "zcopy": true, 00:11:40.018 "get_zone_info": false, 00:11:40.018 "zone_management": false, 00:11:40.018 "zone_append": false, 00:11:40.018 "compare": false, 00:11:40.018 "compare_and_write": false, 00:11:40.018 "abort": true, 00:11:40.018 "seek_hole": false, 00:11:40.018 "seek_data": false, 00:11:40.018 "copy": true, 00:11:40.018 "nvme_iov_md": false 00:11:40.018 }, 00:11:40.018 "memory_domains": [ 00:11:40.018 { 00:11:40.018 "dma_device_id": "system", 00:11:40.018 "dma_device_type": 1 00:11:40.018 }, 00:11:40.018 { 00:11:40.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.018 "dma_device_type": 2 00:11:40.018 } 00:11:40.018 ], 00:11:40.018 "driver_specific": {} 00:11:40.018 }' 00:11:40.018 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.018 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.276 10:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.276 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:40.276 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:40.276 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:40.276 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:40.534 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:40.534 "name": "BaseBdev2", 00:11:40.534 "aliases": [ 00:11:40.534 "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79" 00:11:40.534 ], 00:11:40.534 "product_name": "Malloc disk", 00:11:40.534 "block_size": 512, 00:11:40.534 "num_blocks": 65536, 00:11:40.534 "uuid": "5cfdc0bf-800d-4778-8ad6-273ea7ff8e79", 00:11:40.534 "assigned_rate_limits": { 00:11:40.534 "rw_ios_per_sec": 0, 00:11:40.534 "rw_mbytes_per_sec": 0, 00:11:40.534 "r_mbytes_per_sec": 0, 00:11:40.534 "w_mbytes_per_sec": 0 00:11:40.534 }, 00:11:40.534 "claimed": true, 00:11:40.534 "claim_type": "exclusive_write", 00:11:40.534 "zoned": false, 00:11:40.534 "supported_io_types": { 00:11:40.534 "read": true, 00:11:40.534 "write": true, 00:11:40.534 "unmap": true, 00:11:40.534 "flush": true, 00:11:40.534 "reset": true, 00:11:40.534 "nvme_admin": false, 00:11:40.534 "nvme_io": false, 
00:11:40.534 "nvme_io_md": false, 00:11:40.534 "write_zeroes": true, 00:11:40.534 "zcopy": true, 00:11:40.534 "get_zone_info": false, 00:11:40.534 "zone_management": false, 00:11:40.534 "zone_append": false, 00:11:40.534 "compare": false, 00:11:40.534 "compare_and_write": false, 00:11:40.534 "abort": true, 00:11:40.534 "seek_hole": false, 00:11:40.534 "seek_data": false, 00:11:40.534 "copy": true, 00:11:40.534 "nvme_iov_md": false 00:11:40.534 }, 00:11:40.534 "memory_domains": [ 00:11:40.534 { 00:11:40.534 "dma_device_id": "system", 00:11:40.534 "dma_device_type": 1 00:11:40.534 }, 00:11:40.534 { 00:11:40.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.534 "dma_device_type": 2 00:11:40.534 } 00:11:40.534 ], 00:11:40.534 "driver_specific": {} 00:11:40.534 }' 00:11:40.534 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.534 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.534 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:40.535 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.535 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.535 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:40.535 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.792 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.792 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:40.792 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.792 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.792 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:40.792 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:40.792 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:40.792 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:41.050 "name": "BaseBdev3", 00:11:41.050 "aliases": [ 00:11:41.050 "96e730b9-d702-4a8a-8d72-01927961472c" 00:11:41.050 ], 00:11:41.050 "product_name": "Malloc disk", 00:11:41.050 "block_size": 512, 00:11:41.050 "num_blocks": 65536, 00:11:41.050 "uuid": "96e730b9-d702-4a8a-8d72-01927961472c", 00:11:41.050 "assigned_rate_limits": { 00:11:41.050 "rw_ios_per_sec": 0, 00:11:41.050 "rw_mbytes_per_sec": 0, 00:11:41.050 "r_mbytes_per_sec": 0, 00:11:41.050 "w_mbytes_per_sec": 0 00:11:41.050 }, 00:11:41.050 "claimed": true, 00:11:41.050 "claim_type": "exclusive_write", 00:11:41.050 "zoned": false, 00:11:41.050 "supported_io_types": { 00:11:41.050 "read": true, 00:11:41.050 "write": true, 00:11:41.050 "unmap": true, 00:11:41.050 "flush": true, 00:11:41.050 "reset": true, 00:11:41.050 "nvme_admin": false, 00:11:41.050 "nvme_io": false, 00:11:41.050 "nvme_io_md": false, 00:11:41.050 "write_zeroes": true, 00:11:41.050 "zcopy": true, 00:11:41.050 "get_zone_info": false, 00:11:41.050 "zone_management": false, 
00:11:41.050 "zone_append": false, 00:11:41.050 "compare": false, 00:11:41.050 "compare_and_write": false, 00:11:41.050 "abort": true, 00:11:41.050 "seek_hole": false, 00:11:41.050 "seek_data": false, 00:11:41.050 "copy": true, 00:11:41.050 "nvme_iov_md": false 00:11:41.050 }, 00:11:41.050 "memory_domains": [ 00:11:41.050 { 00:11:41.050 "dma_device_id": "system", 00:11:41.050 "dma_device_type": 1 00:11:41.050 }, 00:11:41.050 { 00:11:41.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.050 "dma_device_type": 2 00:11:41.050 } 00:11:41.050 ], 00:11:41.050 "driver_specific": {} 00:11:41.050 }' 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.050 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.309 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:41.309 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.309 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.309 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:41.309 10:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:41.309 [2024-07-15 10:19:06.074232] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.309 [2024-07-15 10:19:06.074253] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.309 [2024-07-15 10:19:06.074288] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.309 [2024-07-15 10:19:06.074322] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.309 [2024-07-15 10:19:06.074329] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d20a60 name Existed_Raid, state offline 00:11:41.309 10:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1764000 00:11:41.309 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1764000 ']' 00:11:41.309 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1764000 00:11:41.309 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1764000 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:41.568 10:19:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1764000' 00:11:41.568 killing process with pid 1764000 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1764000 00:11:41.568 [2024-07-15 10:19:06.138157] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1764000 00:11:41.568 [2024-07-15 10:19:06.159810] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:41.568 00:11:41.568 real 0m21.143s 00:11:41.568 user 0m38.505s 00:11:41.568 sys 0m4.110s 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.568 10:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.568 ************************************ 00:11:41.568 END TEST raid_state_function_test 00:11:41.568 ************************************ 00:11:41.827 10:19:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:41.828 10:19:06 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:41.828 10:19:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:41.828 10:19:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.828 10:19:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.828 ************************************ 00:11:41.828 START TEST raid_state_function_test_sb 00:11:41.828 ************************************ 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:41.828 10:19:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1768306 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1768306' 00:11:41.828 Process raid pid: 1768306 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1768306 /var/tmp/spdk-raid.sock 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1768306 ']' 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:41.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.828 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.828 [2024-07-15 10:19:06.451361] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
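For orientation, the superblock variant started here exercises the same RPC flow as the raid_state_function_test traced above, with -s added to bdev_raid_create. The following is a minimal sketch of that flow, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock (as launched above with -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid); the RPC shell variable is shorthand introduced only for readability and is not part of the test script.

# Shorthand for the rpc.py invocation used throughout this trace (illustrative only).
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create the three 32 MiB, 512-byte-block malloc base bdevs the test expects
# (the trace shows num_blocks 65536 and block_size 512 for each).
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2
$RPC bdev_malloc_create 32 512 -b BaseBdev3

# Assemble a raid0 bdev with a 64 KiB strip size and an on-disk superblock (-s),
# exactly as bdev_raid.sh@264 does below.
$RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Check the raid state and list the configured base bdevs, mirroring
# verify_raid_bdev_state and verify_raid_bdev_properties in bdev_raid.sh.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
$RPC bdev_get_bdevs -b Existed_Raid | jq '.[]' | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'

# Tear the raid bdev down again, as bdev_raid_delete does at the end of each pass.
$RPC bdev_raid_delete Existed_Raid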
00:11:41.828 [2024-07-15 10:19:06.451401] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:41.828 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:41.828 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:41.828 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:41.828 [2024-07-15 10:19:06.538830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.828 [2024-07-15 10:19:06.613096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.087 [2024-07-15 10:19:06.667270] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.087 [2024-07-15 10:19:06.667297] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.655 10:19:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.655 10:19:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:42.656 [2024-07-15 10:19:07.398324] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.656 [2024-07-15 10:19:07.398356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.656 [2024-07-15 10:19:07.398364] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.656 [2024-07-15 10:19:07.398372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.656 [2024-07-15 10:19:07.398377] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.656 [2024-07-15 10:19:07.398384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.656 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.984 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:42.984 "name": "Existed_Raid", 00:11:42.984 "uuid": "df8d7f10-efb3-45f5-907d-8cdea974d1ed", 00:11:42.984 "strip_size_kb": 64, 00:11:42.984 "state": "configuring", 00:11:42.984 "raid_level": "raid0", 00:11:42.984 "superblock": true, 00:11:42.984 "num_base_bdevs": 3, 00:11:42.984 "num_base_bdevs_discovered": 0, 00:11:42.984 "num_base_bdevs_operational": 3, 00:11:42.984 "base_bdevs_list": [ 00:11:42.984 { 00:11:42.984 "name": "BaseBdev1", 00:11:42.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.984 "is_configured": false, 00:11:42.984 "data_offset": 0, 00:11:42.984 "data_size": 0 00:11:42.984 }, 00:11:42.984 { 00:11:42.984 "name": "BaseBdev2", 00:11:42.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.984 "is_configured": false, 00:11:42.984 "data_offset": 0, 00:11:42.984 "data_size": 0 00:11:42.984 }, 00:11:42.984 { 00:11:42.984 "name": "BaseBdev3", 00:11:42.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.984 "is_configured": false, 00:11:42.984 "data_offset": 0, 00:11:42.984 "data_size": 0 00:11:42.984 } 00:11:42.984 ] 00:11:42.984 }' 00:11:42.984 10:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.984 10:19:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.551 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:43.551 [2024-07-15 10:19:08.216316] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.551 [2024-07-15 10:19:08.216336] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe41f40 name Existed_Raid, state configuring 00:11:43.551 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:43.810 [2024-07-15 10:19:08.388779] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.810 [2024-07-15 10:19:08.388799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.810 [2024-07-15 10:19:08.388805] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.810 [2024-07-15 10:19:08.388812] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.810 [2024-07-15 10:19:08.388817] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.810 [2024-07-15 10:19:08.388824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.810 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:43.810 [2024-07-15 10:19:08.557531] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.810 BaseBdev1 00:11:43.810 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:43.810 10:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:43.810 10:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:43.810 10:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:43.810 10:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:43.810 10:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:43.810 10:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:44.068 10:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:44.326 [ 00:11:44.326 { 00:11:44.326 "name": "BaseBdev1", 00:11:44.326 "aliases": [ 00:11:44.326 "a4f17fc6-5d7b-436b-86d3-1af65c77b124" 00:11:44.326 ], 00:11:44.326 "product_name": "Malloc disk", 00:11:44.326 "block_size": 512, 00:11:44.326 "num_blocks": 65536, 00:11:44.326 "uuid": "a4f17fc6-5d7b-436b-86d3-1af65c77b124", 00:11:44.326 "assigned_rate_limits": { 00:11:44.326 "rw_ios_per_sec": 0, 00:11:44.326 "rw_mbytes_per_sec": 0, 00:11:44.326 "r_mbytes_per_sec": 0, 00:11:44.326 "w_mbytes_per_sec": 0 00:11:44.326 }, 00:11:44.326 "claimed": true, 00:11:44.326 "claim_type": "exclusive_write", 00:11:44.326 "zoned": false, 00:11:44.326 "supported_io_types": { 00:11:44.326 "read": true, 00:11:44.326 "write": true, 00:11:44.326 "unmap": true, 00:11:44.326 "flush": true, 00:11:44.326 "reset": true, 00:11:44.326 "nvme_admin": false, 00:11:44.326 "nvme_io": false, 00:11:44.326 "nvme_io_md": false, 00:11:44.326 "write_zeroes": true, 00:11:44.326 "zcopy": true, 00:11:44.326 "get_zone_info": false, 00:11:44.326 "zone_management": false, 00:11:44.326 "zone_append": false, 00:11:44.326 "compare": false, 00:11:44.326 "compare_and_write": false, 00:11:44.326 "abort": true, 00:11:44.326 "seek_hole": false, 00:11:44.326 "seek_data": false, 00:11:44.326 "copy": true, 00:11:44.326 "nvme_iov_md": false 00:11:44.326 }, 00:11:44.326 "memory_domains": [ 00:11:44.326 { 00:11:44.326 "dma_device_id": "system", 00:11:44.326 "dma_device_type": 1 00:11:44.326 }, 00:11:44.326 { 00:11:44.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.326 "dma_device_type": 2 00:11:44.326 } 00:11:44.326 ], 00:11:44.326 "driver_specific": {} 00:11:44.326 } 00:11:44.326 ] 00:11:44.326 10:19:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:44.326 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.327 10:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.327 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:44.327 "name": "Existed_Raid", 00:11:44.327 "uuid": "3bb38ce5-5090-4020-848a-9076bc999ab0", 00:11:44.327 "strip_size_kb": 64, 00:11:44.327 "state": "configuring", 00:11:44.327 "raid_level": "raid0", 00:11:44.327 "superblock": true, 00:11:44.327 "num_base_bdevs": 3, 00:11:44.327 "num_base_bdevs_discovered": 1, 00:11:44.327 "num_base_bdevs_operational": 3, 00:11:44.327 "base_bdevs_list": [ 00:11:44.327 { 00:11:44.327 "name": "BaseBdev1", 00:11:44.327 "uuid": "a4f17fc6-5d7b-436b-86d3-1af65c77b124", 00:11:44.327 "is_configured": true, 00:11:44.327 "data_offset": 2048, 00:11:44.327 "data_size": 63488 00:11:44.327 }, 00:11:44.327 { 00:11:44.327 "name": "BaseBdev2", 00:11:44.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.327 "is_configured": false, 00:11:44.327 "data_offset": 0, 00:11:44.327 "data_size": 0 00:11:44.327 }, 00:11:44.327 { 00:11:44.327 "name": "BaseBdev3", 00:11:44.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.327 "is_configured": false, 00:11:44.327 "data_offset": 0, 00:11:44.327 "data_size": 0 00:11:44.327 } 00:11:44.327 ] 00:11:44.327 }' 00:11:44.327 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:44.327 10:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.893 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:45.152 [2024-07-15 10:19:09.748588] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.152 [2024-07-15 10:19:09.748616] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe41810 name Existed_Raid, state configuring 00:11:45.152 10:19:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:45.152 [2024-07-15 10:19:09.913058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.152 [2024-07-15 10:19:09.914125] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.152 [2024-07-15 10:19:09.914154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.152 [2024-07-15 10:19:09.914161] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.152 [2024-07-15 10:19:09.914168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.152 10:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.410 10:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.410 "name": "Existed_Raid", 00:11:45.410 "uuid": "7263e86a-e78f-47ae-a819-9f6a3b75959d", 00:11:45.410 "strip_size_kb": 64, 00:11:45.410 "state": "configuring", 00:11:45.410 "raid_level": "raid0", 00:11:45.410 "superblock": true, 00:11:45.410 "num_base_bdevs": 3, 00:11:45.410 "num_base_bdevs_discovered": 1, 00:11:45.410 "num_base_bdevs_operational": 3, 00:11:45.410 "base_bdevs_list": [ 00:11:45.410 { 00:11:45.410 "name": "BaseBdev1", 00:11:45.410 "uuid": "a4f17fc6-5d7b-436b-86d3-1af65c77b124", 00:11:45.410 "is_configured": true, 00:11:45.410 "data_offset": 2048, 00:11:45.410 "data_size": 63488 00:11:45.410 }, 00:11:45.410 { 00:11:45.410 "name": "BaseBdev2", 00:11:45.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.410 "is_configured": false, 00:11:45.410 "data_offset": 0, 
00:11:45.410 "data_size": 0 00:11:45.410 }, 00:11:45.410 { 00:11:45.410 "name": "BaseBdev3", 00:11:45.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.410 "is_configured": false, 00:11:45.410 "data_offset": 0, 00:11:45.410 "data_size": 0 00:11:45.410 } 00:11:45.410 ] 00:11:45.410 }' 00:11:45.410 10:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.410 10:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.976 10:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.235 [2024-07-15 10:19:10.766098] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.235 BaseBdev2 00:11:46.235 10:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:46.235 10:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:46.235 10:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:46.235 10:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:46.235 10:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:46.235 10:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:46.235 10:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:46.235 10:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.492 [ 00:11:46.492 { 00:11:46.492 "name": "BaseBdev2", 00:11:46.492 "aliases": [ 00:11:46.492 "09901080-74b8-4637-9ade-167cbfdf28cb" 00:11:46.492 ], 00:11:46.492 "product_name": "Malloc disk", 00:11:46.492 "block_size": 512, 00:11:46.492 "num_blocks": 65536, 00:11:46.492 "uuid": "09901080-74b8-4637-9ade-167cbfdf28cb", 00:11:46.492 "assigned_rate_limits": { 00:11:46.492 "rw_ios_per_sec": 0, 00:11:46.492 "rw_mbytes_per_sec": 0, 00:11:46.492 "r_mbytes_per_sec": 0, 00:11:46.492 "w_mbytes_per_sec": 0 00:11:46.492 }, 00:11:46.492 "claimed": true, 00:11:46.492 "claim_type": "exclusive_write", 00:11:46.492 "zoned": false, 00:11:46.492 "supported_io_types": { 00:11:46.492 "read": true, 00:11:46.492 "write": true, 00:11:46.492 "unmap": true, 00:11:46.492 "flush": true, 00:11:46.492 "reset": true, 00:11:46.492 "nvme_admin": false, 00:11:46.492 "nvme_io": false, 00:11:46.492 "nvme_io_md": false, 00:11:46.492 "write_zeroes": true, 00:11:46.492 "zcopy": true, 00:11:46.492 "get_zone_info": false, 00:11:46.492 "zone_management": false, 00:11:46.492 "zone_append": false, 00:11:46.492 "compare": false, 00:11:46.492 "compare_and_write": false, 00:11:46.492 "abort": true, 00:11:46.492 "seek_hole": false, 00:11:46.492 "seek_data": false, 00:11:46.492 "copy": true, 00:11:46.492 "nvme_iov_md": false 00:11:46.492 }, 00:11:46.492 "memory_domains": [ 00:11:46.492 { 00:11:46.492 "dma_device_id": "system", 00:11:46.492 "dma_device_type": 1 00:11:46.492 }, 00:11:46.492 { 00:11:46.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.493 "dma_device_type": 
2 00:11:46.493 } 00:11:46.493 ], 00:11:46.493 "driver_specific": {} 00:11:46.493 } 00:11:46.493 ] 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.493 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.750 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.750 "name": "Existed_Raid", 00:11:46.750 "uuid": "7263e86a-e78f-47ae-a819-9f6a3b75959d", 00:11:46.750 "strip_size_kb": 64, 00:11:46.750 "state": "configuring", 00:11:46.750 "raid_level": "raid0", 00:11:46.750 "superblock": true, 00:11:46.750 "num_base_bdevs": 3, 00:11:46.750 "num_base_bdevs_discovered": 2, 00:11:46.750 "num_base_bdevs_operational": 3, 00:11:46.750 "base_bdevs_list": [ 00:11:46.750 { 00:11:46.750 "name": "BaseBdev1", 00:11:46.750 "uuid": "a4f17fc6-5d7b-436b-86d3-1af65c77b124", 00:11:46.750 "is_configured": true, 00:11:46.750 "data_offset": 2048, 00:11:46.750 "data_size": 63488 00:11:46.750 }, 00:11:46.750 { 00:11:46.750 "name": "BaseBdev2", 00:11:46.750 "uuid": "09901080-74b8-4637-9ade-167cbfdf28cb", 00:11:46.750 "is_configured": true, 00:11:46.750 "data_offset": 2048, 00:11:46.750 "data_size": 63488 00:11:46.750 }, 00:11:46.750 { 00:11:46.750 "name": "BaseBdev3", 00:11:46.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.750 "is_configured": false, 00:11:46.750 "data_offset": 0, 00:11:46.750 "data_size": 0 00:11:46.750 } 00:11:46.750 ] 00:11:46.750 }' 00:11:46.750 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.750 10:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.007 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.265 [2024-07-15 10:19:11.923780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.265 [2024-07-15 10:19:11.923894] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe42700 00:11:47.265 [2024-07-15 10:19:11.923913] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:47.265 [2024-07-15 10:19:11.924036] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe423d0 00:11:47.265 [2024-07-15 10:19:11.924121] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe42700 00:11:47.265 [2024-07-15 10:19:11.924127] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe42700 00:11:47.265 [2024-07-15 10:19:11.924189] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.265 BaseBdev3 00:11:47.265 10:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:47.265 10:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:47.265 10:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:47.265 10:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:47.265 10:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:47.265 10:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:47.265 10:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:47.524 10:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.524 [ 00:11:47.524 { 00:11:47.524 "name": "BaseBdev3", 00:11:47.524 "aliases": [ 00:11:47.524 "d59ad34b-f533-40f2-8532-9bd716d36ec2" 00:11:47.524 ], 00:11:47.524 "product_name": "Malloc disk", 00:11:47.524 "block_size": 512, 00:11:47.524 "num_blocks": 65536, 00:11:47.524 "uuid": "d59ad34b-f533-40f2-8532-9bd716d36ec2", 00:11:47.524 "assigned_rate_limits": { 00:11:47.524 "rw_ios_per_sec": 0, 00:11:47.524 "rw_mbytes_per_sec": 0, 00:11:47.524 "r_mbytes_per_sec": 0, 00:11:47.524 "w_mbytes_per_sec": 0 00:11:47.524 }, 00:11:47.524 "claimed": true, 00:11:47.524 "claim_type": "exclusive_write", 00:11:47.524 "zoned": false, 00:11:47.524 "supported_io_types": { 00:11:47.524 "read": true, 00:11:47.524 "write": true, 00:11:47.524 "unmap": true, 00:11:47.524 "flush": true, 00:11:47.524 "reset": true, 00:11:47.524 "nvme_admin": false, 00:11:47.524 "nvme_io": false, 00:11:47.524 "nvme_io_md": false, 00:11:47.524 "write_zeroes": true, 00:11:47.524 "zcopy": true, 00:11:47.524 "get_zone_info": false, 00:11:47.524 "zone_management": false, 00:11:47.524 "zone_append": false, 00:11:47.524 "compare": false, 00:11:47.524 "compare_and_write": false, 00:11:47.524 "abort": true, 00:11:47.524 "seek_hole": false, 00:11:47.524 "seek_data": false, 00:11:47.524 "copy": true, 00:11:47.524 "nvme_iov_md": false 00:11:47.524 }, 00:11:47.524 "memory_domains": [ 00:11:47.524 { 00:11:47.524 "dma_device_id": "system", 00:11:47.524 "dma_device_type": 1 00:11:47.524 }, 00:11:47.524 { 00:11:47.524 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.524 "dma_device_type": 2 00:11:47.524 } 00:11:47.524 ], 00:11:47.525 "driver_specific": {} 00:11:47.525 } 00:11:47.525 ] 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.525 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.783 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.783 "name": "Existed_Raid", 00:11:47.783 "uuid": "7263e86a-e78f-47ae-a819-9f6a3b75959d", 00:11:47.783 "strip_size_kb": 64, 00:11:47.783 "state": "online", 00:11:47.783 "raid_level": "raid0", 00:11:47.783 "superblock": true, 00:11:47.783 "num_base_bdevs": 3, 00:11:47.783 "num_base_bdevs_discovered": 3, 00:11:47.783 "num_base_bdevs_operational": 3, 00:11:47.783 "base_bdevs_list": [ 00:11:47.783 { 00:11:47.783 "name": "BaseBdev1", 00:11:47.783 "uuid": "a4f17fc6-5d7b-436b-86d3-1af65c77b124", 00:11:47.783 "is_configured": true, 00:11:47.783 "data_offset": 2048, 00:11:47.783 "data_size": 63488 00:11:47.783 }, 00:11:47.783 { 00:11:47.783 "name": "BaseBdev2", 00:11:47.783 "uuid": "09901080-74b8-4637-9ade-167cbfdf28cb", 00:11:47.783 "is_configured": true, 00:11:47.783 "data_offset": 2048, 00:11:47.783 "data_size": 63488 00:11:47.783 }, 00:11:47.783 { 00:11:47.783 "name": "BaseBdev3", 00:11:47.783 "uuid": "d59ad34b-f533-40f2-8532-9bd716d36ec2", 00:11:47.783 "is_configured": true, 00:11:47.783 "data_offset": 2048, 00:11:47.783 "data_size": 63488 00:11:47.783 } 00:11:47.783 ] 00:11:47.783 }' 00:11:47.783 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:47.783 10:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.350 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties 
Existed_Raid 00:11:48.350 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:48.350 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:48.350 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:48.350 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:48.350 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:48.350 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:48.350 10:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:48.350 [2024-07-15 10:19:13.086956] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.350 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:48.350 "name": "Existed_Raid", 00:11:48.350 "aliases": [ 00:11:48.350 "7263e86a-e78f-47ae-a819-9f6a3b75959d" 00:11:48.350 ], 00:11:48.350 "product_name": "Raid Volume", 00:11:48.350 "block_size": 512, 00:11:48.350 "num_blocks": 190464, 00:11:48.350 "uuid": "7263e86a-e78f-47ae-a819-9f6a3b75959d", 00:11:48.350 "assigned_rate_limits": { 00:11:48.350 "rw_ios_per_sec": 0, 00:11:48.350 "rw_mbytes_per_sec": 0, 00:11:48.350 "r_mbytes_per_sec": 0, 00:11:48.350 "w_mbytes_per_sec": 0 00:11:48.350 }, 00:11:48.350 "claimed": false, 00:11:48.350 "zoned": false, 00:11:48.350 "supported_io_types": { 00:11:48.350 "read": true, 00:11:48.350 "write": true, 00:11:48.350 "unmap": true, 00:11:48.350 "flush": true, 00:11:48.350 "reset": true, 00:11:48.350 "nvme_admin": false, 00:11:48.350 "nvme_io": false, 00:11:48.350 "nvme_io_md": false, 00:11:48.350 "write_zeroes": true, 00:11:48.350 "zcopy": false, 00:11:48.350 "get_zone_info": false, 00:11:48.350 "zone_management": false, 00:11:48.350 "zone_append": false, 00:11:48.350 "compare": false, 00:11:48.350 "compare_and_write": false, 00:11:48.350 "abort": false, 00:11:48.350 "seek_hole": false, 00:11:48.350 "seek_data": false, 00:11:48.350 "copy": false, 00:11:48.350 "nvme_iov_md": false 00:11:48.350 }, 00:11:48.350 "memory_domains": [ 00:11:48.350 { 00:11:48.350 "dma_device_id": "system", 00:11:48.350 "dma_device_type": 1 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.350 "dma_device_type": 2 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "dma_device_id": "system", 00:11:48.350 "dma_device_type": 1 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.350 "dma_device_type": 2 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "dma_device_id": "system", 00:11:48.350 "dma_device_type": 1 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.350 "dma_device_type": 2 00:11:48.350 } 00:11:48.350 ], 00:11:48.350 "driver_specific": { 00:11:48.350 "raid": { 00:11:48.350 "uuid": "7263e86a-e78f-47ae-a819-9f6a3b75959d", 00:11:48.350 "strip_size_kb": 64, 00:11:48.350 "state": "online", 00:11:48.350 "raid_level": "raid0", 00:11:48.350 "superblock": true, 00:11:48.350 "num_base_bdevs": 3, 00:11:48.350 "num_base_bdevs_discovered": 3, 00:11:48.350 "num_base_bdevs_operational": 3, 00:11:48.350 "base_bdevs_list": [ 00:11:48.350 { 00:11:48.350 "name": "BaseBdev1", 
00:11:48.350 "uuid": "a4f17fc6-5d7b-436b-86d3-1af65c77b124", 00:11:48.350 "is_configured": true, 00:11:48.350 "data_offset": 2048, 00:11:48.350 "data_size": 63488 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "name": "BaseBdev2", 00:11:48.350 "uuid": "09901080-74b8-4637-9ade-167cbfdf28cb", 00:11:48.350 "is_configured": true, 00:11:48.350 "data_offset": 2048, 00:11:48.350 "data_size": 63488 00:11:48.350 }, 00:11:48.350 { 00:11:48.350 "name": "BaseBdev3", 00:11:48.350 "uuid": "d59ad34b-f533-40f2-8532-9bd716d36ec2", 00:11:48.350 "is_configured": true, 00:11:48.350 "data_offset": 2048, 00:11:48.350 "data_size": 63488 00:11:48.350 } 00:11:48.350 ] 00:11:48.350 } 00:11:48.350 } 00:11:48.350 }' 00:11:48.350 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.611 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:48.611 BaseBdev2 00:11:48.611 BaseBdev3' 00:11:48.611 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:48.611 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:48.611 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:48.611 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:48.611 "name": "BaseBdev1", 00:11:48.611 "aliases": [ 00:11:48.611 "a4f17fc6-5d7b-436b-86d3-1af65c77b124" 00:11:48.611 ], 00:11:48.611 "product_name": "Malloc disk", 00:11:48.611 "block_size": 512, 00:11:48.611 "num_blocks": 65536, 00:11:48.611 "uuid": "a4f17fc6-5d7b-436b-86d3-1af65c77b124", 00:11:48.611 "assigned_rate_limits": { 00:11:48.611 "rw_ios_per_sec": 0, 00:11:48.611 "rw_mbytes_per_sec": 0, 00:11:48.611 "r_mbytes_per_sec": 0, 00:11:48.611 "w_mbytes_per_sec": 0 00:11:48.611 }, 00:11:48.611 "claimed": true, 00:11:48.611 "claim_type": "exclusive_write", 00:11:48.611 "zoned": false, 00:11:48.611 "supported_io_types": { 00:11:48.611 "read": true, 00:11:48.611 "write": true, 00:11:48.611 "unmap": true, 00:11:48.611 "flush": true, 00:11:48.611 "reset": true, 00:11:48.611 "nvme_admin": false, 00:11:48.611 "nvme_io": false, 00:11:48.611 "nvme_io_md": false, 00:11:48.611 "write_zeroes": true, 00:11:48.611 "zcopy": true, 00:11:48.611 "get_zone_info": false, 00:11:48.611 "zone_management": false, 00:11:48.611 "zone_append": false, 00:11:48.611 "compare": false, 00:11:48.611 "compare_and_write": false, 00:11:48.611 "abort": true, 00:11:48.611 "seek_hole": false, 00:11:48.611 "seek_data": false, 00:11:48.611 "copy": true, 00:11:48.611 "nvme_iov_md": false 00:11:48.611 }, 00:11:48.611 "memory_domains": [ 00:11:48.611 { 00:11:48.611 "dma_device_id": "system", 00:11:48.611 "dma_device_type": 1 00:11:48.611 }, 00:11:48.611 { 00:11:48.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.611 "dma_device_type": 2 00:11:48.611 } 00:11:48.611 ], 00:11:48.611 "driver_specific": {} 00:11:48.611 }' 00:11:48.611 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:48.611 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:48.869 10:19:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:48.869 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:49.127 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:49.127 "name": "BaseBdev2", 00:11:49.127 "aliases": [ 00:11:49.127 "09901080-74b8-4637-9ade-167cbfdf28cb" 00:11:49.127 ], 00:11:49.127 "product_name": "Malloc disk", 00:11:49.127 "block_size": 512, 00:11:49.127 "num_blocks": 65536, 00:11:49.127 "uuid": "09901080-74b8-4637-9ade-167cbfdf28cb", 00:11:49.127 "assigned_rate_limits": { 00:11:49.127 "rw_ios_per_sec": 0, 00:11:49.127 "rw_mbytes_per_sec": 0, 00:11:49.127 "r_mbytes_per_sec": 0, 00:11:49.127 "w_mbytes_per_sec": 0 00:11:49.127 }, 00:11:49.127 "claimed": true, 00:11:49.127 "claim_type": "exclusive_write", 00:11:49.127 "zoned": false, 00:11:49.127 "supported_io_types": { 00:11:49.127 "read": true, 00:11:49.127 "write": true, 00:11:49.127 "unmap": true, 00:11:49.127 "flush": true, 00:11:49.127 "reset": true, 00:11:49.127 "nvme_admin": false, 00:11:49.127 "nvme_io": false, 00:11:49.127 "nvme_io_md": false, 00:11:49.127 "write_zeroes": true, 00:11:49.127 "zcopy": true, 00:11:49.127 "get_zone_info": false, 00:11:49.127 "zone_management": false, 00:11:49.127 "zone_append": false, 00:11:49.127 "compare": false, 00:11:49.127 "compare_and_write": false, 00:11:49.127 "abort": true, 00:11:49.127 "seek_hole": false, 00:11:49.127 "seek_data": false, 00:11:49.127 "copy": true, 00:11:49.127 "nvme_iov_md": false 00:11:49.127 }, 00:11:49.127 "memory_domains": [ 00:11:49.127 { 00:11:49.127 "dma_device_id": "system", 00:11:49.127 "dma_device_type": 1 00:11:49.127 }, 00:11:49.127 { 00:11:49.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.127 "dma_device_type": 2 00:11:49.127 } 00:11:49.127 ], 00:11:49.127 "driver_specific": {} 00:11:49.127 }' 00:11:49.127 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:49.127 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:49.127 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:49.127 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:49.127 10:19:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:49.385 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:49.385 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:49.385 10:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:49.385 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:49.385 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:49.385 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:49.385 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:49.385 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:49.385 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:49.385 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:49.668 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:49.668 "name": "BaseBdev3", 00:11:49.668 "aliases": [ 00:11:49.668 "d59ad34b-f533-40f2-8532-9bd716d36ec2" 00:11:49.668 ], 00:11:49.668 "product_name": "Malloc disk", 00:11:49.668 "block_size": 512, 00:11:49.668 "num_blocks": 65536, 00:11:49.668 "uuid": "d59ad34b-f533-40f2-8532-9bd716d36ec2", 00:11:49.668 "assigned_rate_limits": { 00:11:49.668 "rw_ios_per_sec": 0, 00:11:49.668 "rw_mbytes_per_sec": 0, 00:11:49.668 "r_mbytes_per_sec": 0, 00:11:49.668 "w_mbytes_per_sec": 0 00:11:49.668 }, 00:11:49.668 "claimed": true, 00:11:49.668 "claim_type": "exclusive_write", 00:11:49.668 "zoned": false, 00:11:49.668 "supported_io_types": { 00:11:49.668 "read": true, 00:11:49.668 "write": true, 00:11:49.668 "unmap": true, 00:11:49.668 "flush": true, 00:11:49.668 "reset": true, 00:11:49.668 "nvme_admin": false, 00:11:49.668 "nvme_io": false, 00:11:49.668 "nvme_io_md": false, 00:11:49.668 "write_zeroes": true, 00:11:49.668 "zcopy": true, 00:11:49.668 "get_zone_info": false, 00:11:49.668 "zone_management": false, 00:11:49.668 "zone_append": false, 00:11:49.668 "compare": false, 00:11:49.668 "compare_and_write": false, 00:11:49.668 "abort": true, 00:11:49.668 "seek_hole": false, 00:11:49.668 "seek_data": false, 00:11:49.668 "copy": true, 00:11:49.668 "nvme_iov_md": false 00:11:49.668 }, 00:11:49.668 "memory_domains": [ 00:11:49.668 { 00:11:49.668 "dma_device_id": "system", 00:11:49.668 "dma_device_type": 1 00:11:49.668 }, 00:11:49.668 { 00:11:49.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.668 "dma_device_type": 2 00:11:49.668 } 00:11:49.668 ], 00:11:49.668 "driver_specific": {} 00:11:49.668 }' 00:11:49.668 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:49.668 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:49.668 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:49.668 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:49.668 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:49.668 10:19:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:49.668 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:49.926 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:49.926 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:49.926 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:49.926 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:49.926 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:49.926 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:50.184 [2024-07-15 10:19:14.739305] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.184 [2024-07-15 10:19:14.739324] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:50.184 [2024-07-15 10:19:14.739350] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.184 "name": "Existed_Raid", 00:11:50.184 "uuid": "7263e86a-e78f-47ae-a819-9f6a3b75959d", 
00:11:50.184 "strip_size_kb": 64, 00:11:50.184 "state": "offline", 00:11:50.184 "raid_level": "raid0", 00:11:50.184 "superblock": true, 00:11:50.184 "num_base_bdevs": 3, 00:11:50.184 "num_base_bdevs_discovered": 2, 00:11:50.184 "num_base_bdevs_operational": 2, 00:11:50.184 "base_bdevs_list": [ 00:11:50.184 { 00:11:50.184 "name": null, 00:11:50.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.184 "is_configured": false, 00:11:50.184 "data_offset": 2048, 00:11:50.184 "data_size": 63488 00:11:50.184 }, 00:11:50.184 { 00:11:50.184 "name": "BaseBdev2", 00:11:50.184 "uuid": "09901080-74b8-4637-9ade-167cbfdf28cb", 00:11:50.184 "is_configured": true, 00:11:50.184 "data_offset": 2048, 00:11:50.184 "data_size": 63488 00:11:50.184 }, 00:11:50.184 { 00:11:50.184 "name": "BaseBdev3", 00:11:50.184 "uuid": "d59ad34b-f533-40f2-8532-9bd716d36ec2", 00:11:50.184 "is_configured": true, 00:11:50.184 "data_offset": 2048, 00:11:50.184 "data_size": 63488 00:11:50.184 } 00:11:50.184 ] 00:11:50.184 }' 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.184 10:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.750 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:50.750 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:50.750 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:50.750 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.008 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:51.008 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.008 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:51.008 [2024-07-15 10:19:15.770696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:51.008 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:51.008 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:51.008 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.008 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:51.265 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:51.265 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:51.265 10:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:51.522 [2024-07-15 10:19:16.125342] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:51.522 [2024-07-15 10:19:16.125369] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe42700 name Existed_Raid, state offline 00:11:51.523 
10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:51.523 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:51.779 BaseBdev2 00:11:51.779 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:51.779 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:51.779 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:51.780 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:51.780 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:51.780 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:51.780 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:52.037 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.037 [ 00:11:52.037 { 00:11:52.037 "name": "BaseBdev2", 00:11:52.037 "aliases": [ 00:11:52.037 "04b981b8-7019-42f7-bf26-73702710a4a2" 00:11:52.037 ], 00:11:52.037 "product_name": "Malloc disk", 00:11:52.037 "block_size": 512, 00:11:52.037 "num_blocks": 65536, 00:11:52.037 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:11:52.037 "assigned_rate_limits": { 00:11:52.037 "rw_ios_per_sec": 0, 00:11:52.037 "rw_mbytes_per_sec": 0, 00:11:52.037 "r_mbytes_per_sec": 0, 00:11:52.037 "w_mbytes_per_sec": 0 00:11:52.037 }, 00:11:52.037 "claimed": false, 00:11:52.037 "zoned": false, 00:11:52.037 "supported_io_types": { 00:11:52.037 "read": true, 00:11:52.037 "write": true, 00:11:52.037 "unmap": true, 00:11:52.037 "flush": true, 00:11:52.037 "reset": true, 00:11:52.037 "nvme_admin": false, 00:11:52.037 "nvme_io": false, 00:11:52.037 "nvme_io_md": false, 00:11:52.037 "write_zeroes": true, 00:11:52.037 "zcopy": true, 00:11:52.037 "get_zone_info": false, 00:11:52.037 "zone_management": false, 00:11:52.037 "zone_append": false, 00:11:52.037 "compare": false, 00:11:52.037 "compare_and_write": false, 00:11:52.037 "abort": true, 
00:11:52.037 "seek_hole": false, 00:11:52.037 "seek_data": false, 00:11:52.037 "copy": true, 00:11:52.037 "nvme_iov_md": false 00:11:52.037 }, 00:11:52.037 "memory_domains": [ 00:11:52.037 { 00:11:52.037 "dma_device_id": "system", 00:11:52.037 "dma_device_type": 1 00:11:52.037 }, 00:11:52.037 { 00:11:52.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.037 "dma_device_type": 2 00:11:52.037 } 00:11:52.037 ], 00:11:52.037 "driver_specific": {} 00:11:52.037 } 00:11:52.037 ] 00:11:52.037 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:52.037 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:52.037 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:52.037 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:52.295 BaseBdev3 00:11:52.295 10:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:52.295 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:52.295 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:52.295 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:52.295 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:52.295 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:52.295 10:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:52.552 10:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:52.552 [ 00:11:52.552 { 00:11:52.552 "name": "BaseBdev3", 00:11:52.552 "aliases": [ 00:11:52.552 "8ea9955c-182a-49b7-adf2-3e84cae51622" 00:11:52.552 ], 00:11:52.552 "product_name": "Malloc disk", 00:11:52.552 "block_size": 512, 00:11:52.552 "num_blocks": 65536, 00:11:52.552 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:11:52.552 "assigned_rate_limits": { 00:11:52.552 "rw_ios_per_sec": 0, 00:11:52.552 "rw_mbytes_per_sec": 0, 00:11:52.552 "r_mbytes_per_sec": 0, 00:11:52.552 "w_mbytes_per_sec": 0 00:11:52.552 }, 00:11:52.552 "claimed": false, 00:11:52.552 "zoned": false, 00:11:52.552 "supported_io_types": { 00:11:52.552 "read": true, 00:11:52.552 "write": true, 00:11:52.552 "unmap": true, 00:11:52.552 "flush": true, 00:11:52.552 "reset": true, 00:11:52.552 "nvme_admin": false, 00:11:52.552 "nvme_io": false, 00:11:52.552 "nvme_io_md": false, 00:11:52.552 "write_zeroes": true, 00:11:52.552 "zcopy": true, 00:11:52.552 "get_zone_info": false, 00:11:52.552 "zone_management": false, 00:11:52.552 "zone_append": false, 00:11:52.552 "compare": false, 00:11:52.552 "compare_and_write": false, 00:11:52.552 "abort": true, 00:11:52.552 "seek_hole": false, 00:11:52.552 "seek_data": false, 00:11:52.552 "copy": true, 00:11:52.552 "nvme_iov_md": false 00:11:52.552 }, 00:11:52.552 "memory_domains": [ 00:11:52.552 { 00:11:52.552 "dma_device_id": "system", 00:11:52.552 
"dma_device_type": 1 00:11:52.552 }, 00:11:52.552 { 00:11:52.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.552 "dma_device_type": 2 00:11:52.552 } 00:11:52.552 ], 00:11:52.552 "driver_specific": {} 00:11:52.552 } 00:11:52.552 ] 00:11:52.552 10:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:52.552 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:52.552 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:52.552 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:52.809 [2024-07-15 10:19:17.454273] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.809 [2024-07-15 10:19:17.454302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.809 [2024-07-15 10:19:17.454313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.809 [2024-07-15 10:19:17.455178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.809 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.068 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.068 "name": "Existed_Raid", 00:11:53.068 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:11:53.068 "strip_size_kb": 64, 00:11:53.068 "state": "configuring", 00:11:53.068 "raid_level": "raid0", 00:11:53.068 "superblock": true, 00:11:53.068 "num_base_bdevs": 3, 00:11:53.068 "num_base_bdevs_discovered": 2, 00:11:53.068 "num_base_bdevs_operational": 3, 00:11:53.068 "base_bdevs_list": [ 00:11:53.068 { 00:11:53.068 "name": "BaseBdev1", 00:11:53.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.068 "is_configured": false, 00:11:53.068 "data_offset": 0, 
00:11:53.068 "data_size": 0 00:11:53.068 }, 00:11:53.068 { 00:11:53.068 "name": "BaseBdev2", 00:11:53.068 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:11:53.068 "is_configured": true, 00:11:53.068 "data_offset": 2048, 00:11:53.068 "data_size": 63488 00:11:53.068 }, 00:11:53.068 { 00:11:53.068 "name": "BaseBdev3", 00:11:53.068 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:11:53.068 "is_configured": true, 00:11:53.068 "data_offset": 2048, 00:11:53.068 "data_size": 63488 00:11:53.068 } 00:11:53.068 ] 00:11:53.068 }' 00:11:53.068 10:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.068 10:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:53.634 [2024-07-15 10:19:18.284393] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.634 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.893 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.893 "name": "Existed_Raid", 00:11:53.893 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:11:53.893 "strip_size_kb": 64, 00:11:53.893 "state": "configuring", 00:11:53.893 "raid_level": "raid0", 00:11:53.893 "superblock": true, 00:11:53.893 "num_base_bdevs": 3, 00:11:53.893 "num_base_bdevs_discovered": 1, 00:11:53.893 "num_base_bdevs_operational": 3, 00:11:53.893 "base_bdevs_list": [ 00:11:53.893 { 00:11:53.893 "name": "BaseBdev1", 00:11:53.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.893 "is_configured": false, 00:11:53.893 "data_offset": 0, 00:11:53.893 "data_size": 0 00:11:53.893 }, 00:11:53.893 { 00:11:53.893 "name": null, 00:11:53.893 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:11:53.893 "is_configured": false, 00:11:53.893 "data_offset": 2048, 00:11:53.893 "data_size": 63488 00:11:53.893 }, 00:11:53.893 { 
00:11:53.893 "name": "BaseBdev3", 00:11:53.893 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:11:53.893 "is_configured": true, 00:11:53.893 "data_offset": 2048, 00:11:53.893 "data_size": 63488 00:11:53.893 } 00:11:53.893 ] 00:11:53.893 }' 00:11:53.893 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.893 10:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.151 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.151 10:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:54.410 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:54.410 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:54.669 [2024-07-15 10:19:19.265671] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.669 BaseBdev1 00:11:54.669 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:54.669 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:54.669 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:54.669 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:54.669 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:54.669 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:54.669 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:54.669 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.927 [ 00:11:54.927 { 00:11:54.927 "name": "BaseBdev1", 00:11:54.927 "aliases": [ 00:11:54.928 "ce4fd1f3-4375-419d-9894-8338903392ac" 00:11:54.928 ], 00:11:54.928 "product_name": "Malloc disk", 00:11:54.928 "block_size": 512, 00:11:54.928 "num_blocks": 65536, 00:11:54.928 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:11:54.928 "assigned_rate_limits": { 00:11:54.928 "rw_ios_per_sec": 0, 00:11:54.928 "rw_mbytes_per_sec": 0, 00:11:54.928 "r_mbytes_per_sec": 0, 00:11:54.928 "w_mbytes_per_sec": 0 00:11:54.928 }, 00:11:54.928 "claimed": true, 00:11:54.928 "claim_type": "exclusive_write", 00:11:54.928 "zoned": false, 00:11:54.928 "supported_io_types": { 00:11:54.928 "read": true, 00:11:54.928 "write": true, 00:11:54.928 "unmap": true, 00:11:54.928 "flush": true, 00:11:54.928 "reset": true, 00:11:54.928 "nvme_admin": false, 00:11:54.928 "nvme_io": false, 00:11:54.928 "nvme_io_md": false, 00:11:54.928 "write_zeroes": true, 00:11:54.928 "zcopy": true, 00:11:54.928 "get_zone_info": false, 00:11:54.928 "zone_management": false, 00:11:54.928 "zone_append": false, 00:11:54.928 "compare": false, 00:11:54.928 "compare_and_write": false, 
00:11:54.928 "abort": true, 00:11:54.928 "seek_hole": false, 00:11:54.928 "seek_data": false, 00:11:54.928 "copy": true, 00:11:54.928 "nvme_iov_md": false 00:11:54.928 }, 00:11:54.928 "memory_domains": [ 00:11:54.928 { 00:11:54.928 "dma_device_id": "system", 00:11:54.928 "dma_device_type": 1 00:11:54.928 }, 00:11:54.928 { 00:11:54.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.928 "dma_device_type": 2 00:11:54.928 } 00:11:54.928 ], 00:11:54.928 "driver_specific": {} 00:11:54.928 } 00:11:54.928 ] 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.928 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.186 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.186 "name": "Existed_Raid", 00:11:55.186 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:11:55.186 "strip_size_kb": 64, 00:11:55.186 "state": "configuring", 00:11:55.186 "raid_level": "raid0", 00:11:55.186 "superblock": true, 00:11:55.186 "num_base_bdevs": 3, 00:11:55.186 "num_base_bdevs_discovered": 2, 00:11:55.186 "num_base_bdevs_operational": 3, 00:11:55.186 "base_bdevs_list": [ 00:11:55.186 { 00:11:55.186 "name": "BaseBdev1", 00:11:55.186 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:11:55.186 "is_configured": true, 00:11:55.186 "data_offset": 2048, 00:11:55.186 "data_size": 63488 00:11:55.186 }, 00:11:55.186 { 00:11:55.186 "name": null, 00:11:55.186 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:11:55.186 "is_configured": false, 00:11:55.186 "data_offset": 2048, 00:11:55.186 "data_size": 63488 00:11:55.186 }, 00:11:55.186 { 00:11:55.186 "name": "BaseBdev3", 00:11:55.186 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:11:55.186 "is_configured": true, 00:11:55.186 "data_offset": 2048, 00:11:55.186 "data_size": 63488 00:11:55.186 } 00:11:55.186 ] 00:11:55.186 }' 00:11:55.186 10:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.186 10:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:11:55.785 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:55.785 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.785 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:55.785 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:56.055 [2024-07-15 10:19:20.597113] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.055 10:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:56.055 "name": "Existed_Raid", 00:11:56.055 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:11:56.055 "strip_size_kb": 64, 00:11:56.055 "state": "configuring", 00:11:56.055 "raid_level": "raid0", 00:11:56.055 "superblock": true, 00:11:56.055 "num_base_bdevs": 3, 00:11:56.055 "num_base_bdevs_discovered": 1, 00:11:56.055 "num_base_bdevs_operational": 3, 00:11:56.055 "base_bdevs_list": [ 00:11:56.055 { 00:11:56.055 "name": "BaseBdev1", 00:11:56.055 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:11:56.055 "is_configured": true, 00:11:56.055 "data_offset": 2048, 00:11:56.055 "data_size": 63488 00:11:56.055 }, 00:11:56.055 { 00:11:56.055 "name": null, 00:11:56.055 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:11:56.055 "is_configured": false, 00:11:56.055 "data_offset": 2048, 00:11:56.055 "data_size": 63488 00:11:56.056 }, 00:11:56.056 { 00:11:56.056 "name": null, 00:11:56.056 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:11:56.056 "is_configured": false, 00:11:56.056 "data_offset": 2048, 00:11:56.056 "data_size": 63488 00:11:56.056 } 00:11:56.056 ] 00:11:56.056 }' 00:11:56.056 10:19:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.056 10:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.623 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.623 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:56.623 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:56.623 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:56.882 [2024-07-15 10:19:21.555597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.882 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.141 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:57.141 "name": "Existed_Raid", 00:11:57.141 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:11:57.141 "strip_size_kb": 64, 00:11:57.141 "state": "configuring", 00:11:57.141 "raid_level": "raid0", 00:11:57.141 "superblock": true, 00:11:57.141 "num_base_bdevs": 3, 00:11:57.141 "num_base_bdevs_discovered": 2, 00:11:57.141 "num_base_bdevs_operational": 3, 00:11:57.141 "base_bdevs_list": [ 00:11:57.141 { 00:11:57.141 "name": "BaseBdev1", 00:11:57.141 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:11:57.141 "is_configured": true, 00:11:57.141 "data_offset": 2048, 00:11:57.141 "data_size": 63488 00:11:57.141 }, 00:11:57.141 { 00:11:57.141 "name": null, 00:11:57.141 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:11:57.141 "is_configured": false, 00:11:57.141 "data_offset": 2048, 00:11:57.141 "data_size": 63488 00:11:57.141 }, 00:11:57.141 { 00:11:57.141 "name": "BaseBdev3", 00:11:57.141 "uuid": 
"8ea9955c-182a-49b7-adf2-3e84cae51622", 00:11:57.141 "is_configured": true, 00:11:57.141 "data_offset": 2048, 00:11:57.141 "data_size": 63488 00:11:57.141 } 00:11:57.141 ] 00:11:57.141 }' 00:11:57.141 10:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:57.141 10:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.708 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.708 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.708 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:57.708 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:57.967 [2024-07-15 10:19:22.582242] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.967 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.225 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.225 "name": "Existed_Raid", 00:11:58.225 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:11:58.225 "strip_size_kb": 64, 00:11:58.225 "state": "configuring", 00:11:58.225 "raid_level": "raid0", 00:11:58.225 "superblock": true, 00:11:58.225 "num_base_bdevs": 3, 00:11:58.225 "num_base_bdevs_discovered": 1, 00:11:58.225 "num_base_bdevs_operational": 3, 00:11:58.225 "base_bdevs_list": [ 00:11:58.225 { 00:11:58.225 "name": null, 00:11:58.225 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:11:58.225 "is_configured": false, 00:11:58.225 "data_offset": 2048, 00:11:58.225 "data_size": 63488 00:11:58.225 }, 00:11:58.225 { 00:11:58.225 "name": null, 00:11:58.225 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:11:58.225 "is_configured": 
false, 00:11:58.225 "data_offset": 2048, 00:11:58.225 "data_size": 63488 00:11:58.225 }, 00:11:58.225 { 00:11:58.225 "name": "BaseBdev3", 00:11:58.225 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:11:58.225 "is_configured": true, 00:11:58.225 "data_offset": 2048, 00:11:58.225 "data_size": 63488 00:11:58.225 } 00:11:58.225 ] 00:11:58.225 }' 00:11:58.225 10:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.225 10:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.483 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.483 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.741 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:58.741 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:58.998 [2024-07-15 10:19:23.566576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.998 "name": "Existed_Raid", 00:11:58.998 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:11:58.998 "strip_size_kb": 64, 00:11:58.998 "state": "configuring", 00:11:58.998 "raid_level": "raid0", 00:11:58.998 "superblock": true, 00:11:58.998 "num_base_bdevs": 3, 00:11:58.998 "num_base_bdevs_discovered": 2, 00:11:58.998 "num_base_bdevs_operational": 3, 00:11:58.998 "base_bdevs_list": [ 00:11:58.998 { 00:11:58.998 "name": null, 00:11:58.998 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:11:58.998 "is_configured": false, 00:11:58.998 
"data_offset": 2048, 00:11:58.998 "data_size": 63488 00:11:58.998 }, 00:11:58.998 { 00:11:58.998 "name": "BaseBdev2", 00:11:58.998 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:11:58.998 "is_configured": true, 00:11:58.998 "data_offset": 2048, 00:11:58.998 "data_size": 63488 00:11:58.998 }, 00:11:58.998 { 00:11:58.998 "name": "BaseBdev3", 00:11:58.998 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:11:58.998 "is_configured": true, 00:11:58.998 "data_offset": 2048, 00:11:58.998 "data_size": 63488 00:11:58.998 } 00:11:58.998 ] 00:11:58.998 }' 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.998 10:19:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.563 10:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.563 10:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.822 10:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:59.822 10:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.822 10:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:59.822 10:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ce4fd1f3-4375-419d-9894-8338903392ac 00:12:00.080 [2024-07-15 10:19:24.732460] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:00.080 [2024-07-15 10:19:24.732582] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfe63e0 00:12:00.080 [2024-07-15 10:19:24.732591] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:00.080 [2024-07-15 10:19:24.732710] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xff0190 00:12:00.080 [2024-07-15 10:19:24.732786] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfe63e0 00:12:00.080 [2024-07-15 10:19:24.732794] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xfe63e0 00:12:00.080 [2024-07-15 10:19:24.732854] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.080 NewBaseBdev 00:12:00.080 10:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:00.080 10:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:00.080 10:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:00.080 10:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:00.080 10:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:00.080 10:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:00.080 10:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:00.339 10:19:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:00.339 [ 00:12:00.339 { 00:12:00.339 "name": "NewBaseBdev", 00:12:00.339 "aliases": [ 00:12:00.339 "ce4fd1f3-4375-419d-9894-8338903392ac" 00:12:00.339 ], 00:12:00.339 "product_name": "Malloc disk", 00:12:00.339 "block_size": 512, 00:12:00.339 "num_blocks": 65536, 00:12:00.339 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:12:00.339 "assigned_rate_limits": { 00:12:00.339 "rw_ios_per_sec": 0, 00:12:00.339 "rw_mbytes_per_sec": 0, 00:12:00.339 "r_mbytes_per_sec": 0, 00:12:00.339 "w_mbytes_per_sec": 0 00:12:00.339 }, 00:12:00.339 "claimed": true, 00:12:00.339 "claim_type": "exclusive_write", 00:12:00.339 "zoned": false, 00:12:00.339 "supported_io_types": { 00:12:00.339 "read": true, 00:12:00.339 "write": true, 00:12:00.339 "unmap": true, 00:12:00.339 "flush": true, 00:12:00.339 "reset": true, 00:12:00.339 "nvme_admin": false, 00:12:00.339 "nvme_io": false, 00:12:00.339 "nvme_io_md": false, 00:12:00.339 "write_zeroes": true, 00:12:00.339 "zcopy": true, 00:12:00.339 "get_zone_info": false, 00:12:00.339 "zone_management": false, 00:12:00.339 "zone_append": false, 00:12:00.339 "compare": false, 00:12:00.339 "compare_and_write": false, 00:12:00.339 "abort": true, 00:12:00.339 "seek_hole": false, 00:12:00.339 "seek_data": false, 00:12:00.339 "copy": true, 00:12:00.339 "nvme_iov_md": false 00:12:00.339 }, 00:12:00.339 "memory_domains": [ 00:12:00.339 { 00:12:00.339 "dma_device_id": "system", 00:12:00.339 "dma_device_type": 1 00:12:00.339 }, 00:12:00.339 { 00:12:00.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.339 "dma_device_type": 2 00:12:00.339 } 00:12:00.339 ], 00:12:00.339 "driver_specific": {} 00:12:00.339 } 00:12:00.339 ] 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.339 10:19:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.597 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.597 "name": "Existed_Raid", 00:12:00.597 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:12:00.597 "strip_size_kb": 64, 00:12:00.597 "state": "online", 00:12:00.597 "raid_level": "raid0", 00:12:00.597 "superblock": true, 00:12:00.598 "num_base_bdevs": 3, 00:12:00.598 "num_base_bdevs_discovered": 3, 00:12:00.598 "num_base_bdevs_operational": 3, 00:12:00.598 "base_bdevs_list": [ 00:12:00.598 { 00:12:00.598 "name": "NewBaseBdev", 00:12:00.598 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:12:00.598 "is_configured": true, 00:12:00.598 "data_offset": 2048, 00:12:00.598 "data_size": 63488 00:12:00.598 }, 00:12:00.598 { 00:12:00.598 "name": "BaseBdev2", 00:12:00.598 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:12:00.598 "is_configured": true, 00:12:00.598 "data_offset": 2048, 00:12:00.598 "data_size": 63488 00:12:00.598 }, 00:12:00.598 { 00:12:00.598 "name": "BaseBdev3", 00:12:00.598 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:12:00.598 "is_configured": true, 00:12:00.598 "data_offset": 2048, 00:12:00.598 "data_size": 63488 00:12:00.598 } 00:12:00.598 ] 00:12:00.598 }' 00:12:00.598 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.598 10:19:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:01.165 [2024-07-15 10:19:25.887613] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:01.165 "name": "Existed_Raid", 00:12:01.165 "aliases": [ 00:12:01.165 "5245ce77-ca71-4a38-8891-3c5e6cdb04a9" 00:12:01.165 ], 00:12:01.165 "product_name": "Raid Volume", 00:12:01.165 "block_size": 512, 00:12:01.165 "num_blocks": 190464, 00:12:01.165 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:12:01.165 "assigned_rate_limits": { 00:12:01.165 "rw_ios_per_sec": 0, 00:12:01.165 "rw_mbytes_per_sec": 0, 00:12:01.165 "r_mbytes_per_sec": 0, 00:12:01.165 "w_mbytes_per_sec": 0 00:12:01.165 }, 00:12:01.165 "claimed": false, 00:12:01.165 "zoned": false, 00:12:01.165 "supported_io_types": { 00:12:01.165 "read": true, 00:12:01.165 "write": true, 00:12:01.165 "unmap": true, 00:12:01.165 "flush": true, 00:12:01.165 "reset": true, 00:12:01.165 "nvme_admin": false, 00:12:01.165 "nvme_io": false, 00:12:01.165 "nvme_io_md": 
false, 00:12:01.165 "write_zeroes": true, 00:12:01.165 "zcopy": false, 00:12:01.165 "get_zone_info": false, 00:12:01.165 "zone_management": false, 00:12:01.165 "zone_append": false, 00:12:01.165 "compare": false, 00:12:01.165 "compare_and_write": false, 00:12:01.165 "abort": false, 00:12:01.165 "seek_hole": false, 00:12:01.165 "seek_data": false, 00:12:01.165 "copy": false, 00:12:01.165 "nvme_iov_md": false 00:12:01.165 }, 00:12:01.165 "memory_domains": [ 00:12:01.165 { 00:12:01.165 "dma_device_id": "system", 00:12:01.165 "dma_device_type": 1 00:12:01.165 }, 00:12:01.165 { 00:12:01.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.165 "dma_device_type": 2 00:12:01.165 }, 00:12:01.165 { 00:12:01.165 "dma_device_id": "system", 00:12:01.165 "dma_device_type": 1 00:12:01.165 }, 00:12:01.165 { 00:12:01.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.165 "dma_device_type": 2 00:12:01.165 }, 00:12:01.165 { 00:12:01.165 "dma_device_id": "system", 00:12:01.165 "dma_device_type": 1 00:12:01.165 }, 00:12:01.165 { 00:12:01.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.165 "dma_device_type": 2 00:12:01.165 } 00:12:01.165 ], 00:12:01.165 "driver_specific": { 00:12:01.165 "raid": { 00:12:01.165 "uuid": "5245ce77-ca71-4a38-8891-3c5e6cdb04a9", 00:12:01.165 "strip_size_kb": 64, 00:12:01.165 "state": "online", 00:12:01.165 "raid_level": "raid0", 00:12:01.165 "superblock": true, 00:12:01.165 "num_base_bdevs": 3, 00:12:01.165 "num_base_bdevs_discovered": 3, 00:12:01.165 "num_base_bdevs_operational": 3, 00:12:01.165 "base_bdevs_list": [ 00:12:01.165 { 00:12:01.165 "name": "NewBaseBdev", 00:12:01.165 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:12:01.165 "is_configured": true, 00:12:01.165 "data_offset": 2048, 00:12:01.165 "data_size": 63488 00:12:01.165 }, 00:12:01.165 { 00:12:01.165 "name": "BaseBdev2", 00:12:01.165 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:12:01.165 "is_configured": true, 00:12:01.165 "data_offset": 2048, 00:12:01.165 "data_size": 63488 00:12:01.165 }, 00:12:01.165 { 00:12:01.165 "name": "BaseBdev3", 00:12:01.165 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:12:01.165 "is_configured": true, 00:12:01.165 "data_offset": 2048, 00:12:01.165 "data_size": 63488 00:12:01.165 } 00:12:01.165 ] 00:12:01.165 } 00:12:01.165 } 00:12:01.165 }' 00:12:01.165 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.424 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:01.424 BaseBdev2 00:12:01.424 BaseBdev3' 00:12:01.424 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:01.424 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:01.424 10:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:01.424 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:01.424 "name": "NewBaseBdev", 00:12:01.424 "aliases": [ 00:12:01.424 "ce4fd1f3-4375-419d-9894-8338903392ac" 00:12:01.424 ], 00:12:01.424 "product_name": "Malloc disk", 00:12:01.424 "block_size": 512, 00:12:01.424 "num_blocks": 65536, 00:12:01.424 "uuid": "ce4fd1f3-4375-419d-9894-8338903392ac", 00:12:01.424 "assigned_rate_limits": { 00:12:01.424 
"rw_ios_per_sec": 0, 00:12:01.424 "rw_mbytes_per_sec": 0, 00:12:01.424 "r_mbytes_per_sec": 0, 00:12:01.424 "w_mbytes_per_sec": 0 00:12:01.424 }, 00:12:01.424 "claimed": true, 00:12:01.424 "claim_type": "exclusive_write", 00:12:01.424 "zoned": false, 00:12:01.424 "supported_io_types": { 00:12:01.424 "read": true, 00:12:01.424 "write": true, 00:12:01.424 "unmap": true, 00:12:01.424 "flush": true, 00:12:01.424 "reset": true, 00:12:01.424 "nvme_admin": false, 00:12:01.424 "nvme_io": false, 00:12:01.424 "nvme_io_md": false, 00:12:01.424 "write_zeroes": true, 00:12:01.424 "zcopy": true, 00:12:01.424 "get_zone_info": false, 00:12:01.424 "zone_management": false, 00:12:01.424 "zone_append": false, 00:12:01.424 "compare": false, 00:12:01.424 "compare_and_write": false, 00:12:01.424 "abort": true, 00:12:01.424 "seek_hole": false, 00:12:01.424 "seek_data": false, 00:12:01.424 "copy": true, 00:12:01.424 "nvme_iov_md": false 00:12:01.424 }, 00:12:01.424 "memory_domains": [ 00:12:01.424 { 00:12:01.424 "dma_device_id": "system", 00:12:01.424 "dma_device_type": 1 00:12:01.424 }, 00:12:01.424 { 00:12:01.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.424 "dma_device_type": 2 00:12:01.424 } 00:12:01.424 ], 00:12:01.424 "driver_specific": {} 00:12:01.424 }' 00:12:01.424 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.424 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.424 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:01.424 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:01.683 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:01.941 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:01.941 "name": "BaseBdev2", 00:12:01.941 "aliases": [ 00:12:01.941 "04b981b8-7019-42f7-bf26-73702710a4a2" 00:12:01.941 ], 00:12:01.941 "product_name": "Malloc disk", 00:12:01.941 "block_size": 512, 00:12:01.941 "num_blocks": 65536, 00:12:01.941 "uuid": "04b981b8-7019-42f7-bf26-73702710a4a2", 00:12:01.941 "assigned_rate_limits": { 00:12:01.941 "rw_ios_per_sec": 0, 00:12:01.941 "rw_mbytes_per_sec": 0, 00:12:01.941 "r_mbytes_per_sec": 0, 00:12:01.941 "w_mbytes_per_sec": 0 
00:12:01.941 }, 00:12:01.941 "claimed": true, 00:12:01.941 "claim_type": "exclusive_write", 00:12:01.941 "zoned": false, 00:12:01.941 "supported_io_types": { 00:12:01.941 "read": true, 00:12:01.941 "write": true, 00:12:01.941 "unmap": true, 00:12:01.941 "flush": true, 00:12:01.941 "reset": true, 00:12:01.941 "nvme_admin": false, 00:12:01.941 "nvme_io": false, 00:12:01.941 "nvme_io_md": false, 00:12:01.941 "write_zeroes": true, 00:12:01.941 "zcopy": true, 00:12:01.941 "get_zone_info": false, 00:12:01.941 "zone_management": false, 00:12:01.941 "zone_append": false, 00:12:01.941 "compare": false, 00:12:01.941 "compare_and_write": false, 00:12:01.941 "abort": true, 00:12:01.941 "seek_hole": false, 00:12:01.941 "seek_data": false, 00:12:01.941 "copy": true, 00:12:01.941 "nvme_iov_md": false 00:12:01.941 }, 00:12:01.941 "memory_domains": [ 00:12:01.941 { 00:12:01.941 "dma_device_id": "system", 00:12:01.941 "dma_device_type": 1 00:12:01.941 }, 00:12:01.941 { 00:12:01.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.941 "dma_device_type": 2 00:12:01.941 } 00:12:01.941 ], 00:12:01.941 "driver_specific": {} 00:12:01.941 }' 00:12:01.941 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.941 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:01.941 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:01.941 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:01.941 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:02.198 10:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:02.455 "name": "BaseBdev3", 00:12:02.455 "aliases": [ 00:12:02.455 "8ea9955c-182a-49b7-adf2-3e84cae51622" 00:12:02.455 ], 00:12:02.455 "product_name": "Malloc disk", 00:12:02.455 "block_size": 512, 00:12:02.455 "num_blocks": 65536, 00:12:02.455 "uuid": "8ea9955c-182a-49b7-adf2-3e84cae51622", 00:12:02.455 "assigned_rate_limits": { 00:12:02.455 "rw_ios_per_sec": 0, 00:12:02.455 "rw_mbytes_per_sec": 0, 00:12:02.455 "r_mbytes_per_sec": 0, 00:12:02.455 "w_mbytes_per_sec": 0 00:12:02.455 }, 00:12:02.455 "claimed": true, 00:12:02.455 "claim_type": "exclusive_write", 00:12:02.455 "zoned": false, 00:12:02.455 
"supported_io_types": { 00:12:02.455 "read": true, 00:12:02.455 "write": true, 00:12:02.455 "unmap": true, 00:12:02.455 "flush": true, 00:12:02.455 "reset": true, 00:12:02.455 "nvme_admin": false, 00:12:02.455 "nvme_io": false, 00:12:02.455 "nvme_io_md": false, 00:12:02.455 "write_zeroes": true, 00:12:02.455 "zcopy": true, 00:12:02.455 "get_zone_info": false, 00:12:02.455 "zone_management": false, 00:12:02.455 "zone_append": false, 00:12:02.455 "compare": false, 00:12:02.455 "compare_and_write": false, 00:12:02.455 "abort": true, 00:12:02.455 "seek_hole": false, 00:12:02.455 "seek_data": false, 00:12:02.455 "copy": true, 00:12:02.455 "nvme_iov_md": false 00:12:02.455 }, 00:12:02.455 "memory_domains": [ 00:12:02.455 { 00:12:02.455 "dma_device_id": "system", 00:12:02.455 "dma_device_type": 1 00:12:02.455 }, 00:12:02.455 { 00:12:02.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.455 "dma_device_type": 2 00:12:02.455 } 00:12:02.455 ], 00:12:02.455 "driver_specific": {} 00:12:02.455 }' 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.455 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:02.713 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:02.713 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.713 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:02.713 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:02.713 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:02.971 [2024-07-15 10:19:27.503635] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.971 [2024-07-15 10:19:27.503654] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.971 [2024-07-15 10:19:27.503694] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.971 [2024-07-15 10:19:27.503729] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.971 [2024-07-15 10:19:27.503737] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfe63e0 name Existed_Raid, state offline 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1768306 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1768306 ']' 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1768306 00:12:02.971 10:19:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1768306 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1768306' 00:12:02.971 killing process with pid 1768306 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1768306 00:12:02.971 [2024-07-15 10:19:27.570696] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.971 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1768306 00:12:02.971 [2024-07-15 10:19:27.592997] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.229 10:19:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:03.229 00:12:03.229 real 0m21.360s 00:12:03.229 user 0m39.126s 00:12:03.229 sys 0m4.019s 00:12:03.229 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.229 10:19:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.229 ************************************ 00:12:03.229 END TEST raid_state_function_test_sb 00:12:03.229 ************************************ 00:12:03.229 10:19:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:03.229 10:19:27 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:03.229 10:19:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:03.229 10:19:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.229 10:19:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.229 ************************************ 00:12:03.229 START TEST raid_superblock_test 00:12:03.229 ************************************ 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:03.229 10:19:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1772581 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1772581 /var/tmp/spdk-raid.sock 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1772581 ']' 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:03.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.229 10:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.229 [2024-07-15 10:19:27.907204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
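The trace above brings up a dedicated bdev_svc instance for the superblock test: it is started with -r pointing at a private RPC socket plus -L bdev_raid for debug logging, and waitforlisten blocks until the app answers on that socket before any bdev RPCs are issued. A simplified sketch of that launch-and-wait pattern, in the same shell style as the traced scripts (this is not the real waitforlisten helper; the rpc_get_methods probe, the -t timeout flag and the retry count are assumptions based on a stock SPDK tree):

  rpc_sock=/var/tmp/spdk-raid.sock
  spdk_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk

  # Minimal bdev application with bdev_raid debug logging, as launched in the trace.
  "$spdk_dir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -L bdev_raid &
  raid_pid=$!

  # Poll until the UNIX-domain socket exists and a basic RPC goes through (assumed probe).
  for _ in $(seq 1 100); do
      if [ -S "$rpc_sock" ] && "$spdk_dir/scripts/rpc.py" -s "$rpc_sock" -t 1 rpc_get_methods > /dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done
  # Error handling (process death, exhausted retries) is omitted for brevity.
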
00:12:03.229 [2024-07-15 10:19:27.907249] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772581 ] 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:03.229 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:03.229 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.229 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:03.229 [2024-07-15 10:19:27.998632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.487 [2024-07-15 10:19:28.072302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.487 [2024-07-15 10:19:28.121113] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.487 [2024-07-15 10:19:28.121139] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.053 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:04.310 malloc1 00:12:04.310 10:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.310 [2024-07-15 10:19:29.012668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.310 [2024-07-15 10:19:29.012705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.310 [2024-07-15 10:19:29.012724] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a4c2f0 00:12:04.310 [2024-07-15 10:19:29.012735] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.310 [2024-07-15 10:19:29.013923] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.310 [2024-07-15 10:19:29.013947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.310 pt1 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.310 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:04.566 malloc2 00:12:04.566 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:04.823 [2024-07-15 10:19:29.369733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:04.823 [2024-07-15 10:19:29.369771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.823 [2024-07-15 10:19:29.369788] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a4d6d0 00:12:04.823 [2024-07-15 10:19:29.369800] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.823 [2024-07-15 10:19:29.371015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.823 [2024-07-15 10:19:29.371039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:04.823 pt2 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:04.823 malloc3 00:12:04.823 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:05.080 [2024-07-15 10:19:29.710139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:05.080 [2024-07-15 10:19:29.710174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.080 [2024-07-15 10:19:29.710191] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be66b0 00:12:05.080 [2024-07-15 10:19:29.710201] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.080 [2024-07-15 10:19:29.711244] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.080 [2024-07-15 10:19:29.711268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:05.080 pt3 00:12:05.080 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:05.080 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:05.080 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:05.080 [2024-07-15 10:19:29.866562] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:05.080 [2024-07-15 10:19:29.867408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.080 [2024-07-15 10:19:29.867446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:05.080 [2024-07-15 10:19:29.867549] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1be6cb0 00:12:05.080 [2024-07-15 10:19:29.867556] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:05.080 [2024-07-15 10:19:29.867687] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1be5270 00:12:05.080 [2024-07-15 10:19:29.867785] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1be6cb0 00:12:05.080 [2024-07-15 10:19:29.867791] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1be6cb0 00:12:05.080 [2024-07-15 10:19:29.867859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.337 10:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.337 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.337 "name": "raid_bdev1", 00:12:05.337 "uuid": "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0", 00:12:05.337 "strip_size_kb": 64, 00:12:05.337 "state": "online", 00:12:05.337 "raid_level": "raid0", 00:12:05.337 "superblock": true, 00:12:05.337 "num_base_bdevs": 3, 00:12:05.337 "num_base_bdevs_discovered": 3, 00:12:05.337 "num_base_bdevs_operational": 3, 00:12:05.337 "base_bdevs_list": [ 00:12:05.337 { 00:12:05.337 "name": "pt1", 00:12:05.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.337 "is_configured": true, 00:12:05.337 "data_offset": 2048, 00:12:05.337 "data_size": 63488 00:12:05.337 }, 00:12:05.337 { 00:12:05.337 "name": "pt2", 00:12:05.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.337 "is_configured": true, 00:12:05.337 "data_offset": 2048, 00:12:05.337 "data_size": 63488 00:12:05.337 }, 00:12:05.337 { 00:12:05.337 "name": "pt3", 00:12:05.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.337 "is_configured": true, 00:12:05.337 "data_offset": 2048, 00:12:05.337 "data_size": 63488 00:12:05.337 } 00:12:05.337 ] 00:12:05.337 }' 00:12:05.337 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.337 10:19:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.900 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.900 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:05.900 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:05.900 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:05.900 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:05.900 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:05.900 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:05.900 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:06.158 [2024-07-15 10:19:30.700849] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.158 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:06.158 "name": "raid_bdev1", 00:12:06.158 "aliases": [ 00:12:06.158 "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0" 00:12:06.158 ], 00:12:06.158 "product_name": "Raid Volume", 00:12:06.158 "block_size": 512, 00:12:06.158 "num_blocks": 190464, 00:12:06.158 "uuid": "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0", 00:12:06.158 "assigned_rate_limits": { 00:12:06.158 "rw_ios_per_sec": 0, 00:12:06.158 "rw_mbytes_per_sec": 0, 00:12:06.158 
"r_mbytes_per_sec": 0, 00:12:06.158 "w_mbytes_per_sec": 0 00:12:06.158 }, 00:12:06.158 "claimed": false, 00:12:06.158 "zoned": false, 00:12:06.158 "supported_io_types": { 00:12:06.158 "read": true, 00:12:06.158 "write": true, 00:12:06.158 "unmap": true, 00:12:06.158 "flush": true, 00:12:06.158 "reset": true, 00:12:06.158 "nvme_admin": false, 00:12:06.158 "nvme_io": false, 00:12:06.158 "nvme_io_md": false, 00:12:06.158 "write_zeroes": true, 00:12:06.158 "zcopy": false, 00:12:06.158 "get_zone_info": false, 00:12:06.158 "zone_management": false, 00:12:06.158 "zone_append": false, 00:12:06.158 "compare": false, 00:12:06.158 "compare_and_write": false, 00:12:06.158 "abort": false, 00:12:06.158 "seek_hole": false, 00:12:06.158 "seek_data": false, 00:12:06.158 "copy": false, 00:12:06.158 "nvme_iov_md": false 00:12:06.158 }, 00:12:06.158 "memory_domains": [ 00:12:06.158 { 00:12:06.158 "dma_device_id": "system", 00:12:06.158 "dma_device_type": 1 00:12:06.158 }, 00:12:06.158 { 00:12:06.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.158 "dma_device_type": 2 00:12:06.158 }, 00:12:06.158 { 00:12:06.158 "dma_device_id": "system", 00:12:06.158 "dma_device_type": 1 00:12:06.158 }, 00:12:06.158 { 00:12:06.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.158 "dma_device_type": 2 00:12:06.158 }, 00:12:06.158 { 00:12:06.158 "dma_device_id": "system", 00:12:06.158 "dma_device_type": 1 00:12:06.158 }, 00:12:06.158 { 00:12:06.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.158 "dma_device_type": 2 00:12:06.158 } 00:12:06.158 ], 00:12:06.158 "driver_specific": { 00:12:06.158 "raid": { 00:12:06.158 "uuid": "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0", 00:12:06.158 "strip_size_kb": 64, 00:12:06.158 "state": "online", 00:12:06.158 "raid_level": "raid0", 00:12:06.158 "superblock": true, 00:12:06.158 "num_base_bdevs": 3, 00:12:06.158 "num_base_bdevs_discovered": 3, 00:12:06.158 "num_base_bdevs_operational": 3, 00:12:06.158 "base_bdevs_list": [ 00:12:06.158 { 00:12:06.158 "name": "pt1", 00:12:06.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.158 "is_configured": true, 00:12:06.158 "data_offset": 2048, 00:12:06.158 "data_size": 63488 00:12:06.158 }, 00:12:06.158 { 00:12:06.158 "name": "pt2", 00:12:06.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.158 "is_configured": true, 00:12:06.158 "data_offset": 2048, 00:12:06.158 "data_size": 63488 00:12:06.158 }, 00:12:06.158 { 00:12:06.158 "name": "pt3", 00:12:06.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.158 "is_configured": true, 00:12:06.158 "data_offset": 2048, 00:12:06.158 "data_size": 63488 00:12:06.158 } 00:12:06.158 ] 00:12:06.158 } 00:12:06.158 } 00:12:06.158 }' 00:12:06.158 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.158 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:06.158 pt2 00:12:06.158 pt3' 00:12:06.158 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.158 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:06.158 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.158 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.158 "name": "pt1", 00:12:06.158 "aliases": [ 
00:12:06.158 "00000000-0000-0000-0000-000000000001" 00:12:06.158 ], 00:12:06.158 "product_name": "passthru", 00:12:06.158 "block_size": 512, 00:12:06.158 "num_blocks": 65536, 00:12:06.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.158 "assigned_rate_limits": { 00:12:06.158 "rw_ios_per_sec": 0, 00:12:06.158 "rw_mbytes_per_sec": 0, 00:12:06.158 "r_mbytes_per_sec": 0, 00:12:06.158 "w_mbytes_per_sec": 0 00:12:06.158 }, 00:12:06.158 "claimed": true, 00:12:06.158 "claim_type": "exclusive_write", 00:12:06.158 "zoned": false, 00:12:06.158 "supported_io_types": { 00:12:06.158 "read": true, 00:12:06.158 "write": true, 00:12:06.158 "unmap": true, 00:12:06.158 "flush": true, 00:12:06.158 "reset": true, 00:12:06.158 "nvme_admin": false, 00:12:06.158 "nvme_io": false, 00:12:06.158 "nvme_io_md": false, 00:12:06.158 "write_zeroes": true, 00:12:06.158 "zcopy": true, 00:12:06.158 "get_zone_info": false, 00:12:06.158 "zone_management": false, 00:12:06.158 "zone_append": false, 00:12:06.158 "compare": false, 00:12:06.158 "compare_and_write": false, 00:12:06.158 "abort": true, 00:12:06.158 "seek_hole": false, 00:12:06.158 "seek_data": false, 00:12:06.158 "copy": true, 00:12:06.158 "nvme_iov_md": false 00:12:06.158 }, 00:12:06.158 "memory_domains": [ 00:12:06.158 { 00:12:06.158 "dma_device_id": "system", 00:12:06.158 "dma_device_type": 1 00:12:06.158 }, 00:12:06.158 { 00:12:06.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.158 "dma_device_type": 2 00:12:06.158 } 00:12:06.158 ], 00:12:06.158 "driver_specific": { 00:12:06.158 "passthru": { 00:12:06.158 "name": "pt1", 00:12:06.158 "base_bdev_name": "malloc1" 00:12:06.158 } 00:12:06.158 } 00:12:06.158 }' 00:12:06.158 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.416 10:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.416 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.416 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.416 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.416 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.416 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.416 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.416 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.416 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.673 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.673 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.673 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.673 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:06.673 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.673 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.673 "name": "pt2", 00:12:06.673 "aliases": [ 00:12:06.673 "00000000-0000-0000-0000-000000000002" 00:12:06.673 ], 00:12:06.673 "product_name": "passthru", 00:12:06.673 "block_size": 
512, 00:12:06.673 "num_blocks": 65536, 00:12:06.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.673 "assigned_rate_limits": { 00:12:06.673 "rw_ios_per_sec": 0, 00:12:06.673 "rw_mbytes_per_sec": 0, 00:12:06.673 "r_mbytes_per_sec": 0, 00:12:06.673 "w_mbytes_per_sec": 0 00:12:06.673 }, 00:12:06.673 "claimed": true, 00:12:06.673 "claim_type": "exclusive_write", 00:12:06.673 "zoned": false, 00:12:06.673 "supported_io_types": { 00:12:06.673 "read": true, 00:12:06.673 "write": true, 00:12:06.673 "unmap": true, 00:12:06.673 "flush": true, 00:12:06.673 "reset": true, 00:12:06.673 "nvme_admin": false, 00:12:06.673 "nvme_io": false, 00:12:06.673 "nvme_io_md": false, 00:12:06.673 "write_zeroes": true, 00:12:06.673 "zcopy": true, 00:12:06.673 "get_zone_info": false, 00:12:06.673 "zone_management": false, 00:12:06.673 "zone_append": false, 00:12:06.673 "compare": false, 00:12:06.673 "compare_and_write": false, 00:12:06.673 "abort": true, 00:12:06.673 "seek_hole": false, 00:12:06.673 "seek_data": false, 00:12:06.673 "copy": true, 00:12:06.673 "nvme_iov_md": false 00:12:06.673 }, 00:12:06.673 "memory_domains": [ 00:12:06.673 { 00:12:06.673 "dma_device_id": "system", 00:12:06.673 "dma_device_type": 1 00:12:06.673 }, 00:12:06.673 { 00:12:06.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.673 "dma_device_type": 2 00:12:06.673 } 00:12:06.673 ], 00:12:06.673 "driver_specific": { 00:12:06.673 "passthru": { 00:12:06.673 "name": "pt2", 00:12:06.673 "base_bdev_name": "malloc2" 00:12:06.673 } 00:12:06.673 } 00:12:06.673 }' 00:12:06.673 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.930 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.188 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.188 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:07.188 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:07.188 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:07.188 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:07.188 "name": "pt3", 00:12:07.188 "aliases": [ 00:12:07.188 "00000000-0000-0000-0000-000000000003" 00:12:07.188 ], 00:12:07.188 "product_name": "passthru", 00:12:07.188 "block_size": 512, 00:12:07.188 "num_blocks": 65536, 00:12:07.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.188 
"assigned_rate_limits": { 00:12:07.188 "rw_ios_per_sec": 0, 00:12:07.188 "rw_mbytes_per_sec": 0, 00:12:07.188 "r_mbytes_per_sec": 0, 00:12:07.188 "w_mbytes_per_sec": 0 00:12:07.188 }, 00:12:07.188 "claimed": true, 00:12:07.188 "claim_type": "exclusive_write", 00:12:07.188 "zoned": false, 00:12:07.188 "supported_io_types": { 00:12:07.188 "read": true, 00:12:07.188 "write": true, 00:12:07.188 "unmap": true, 00:12:07.188 "flush": true, 00:12:07.188 "reset": true, 00:12:07.188 "nvme_admin": false, 00:12:07.188 "nvme_io": false, 00:12:07.188 "nvme_io_md": false, 00:12:07.188 "write_zeroes": true, 00:12:07.188 "zcopy": true, 00:12:07.188 "get_zone_info": false, 00:12:07.188 "zone_management": false, 00:12:07.188 "zone_append": false, 00:12:07.188 "compare": false, 00:12:07.188 "compare_and_write": false, 00:12:07.188 "abort": true, 00:12:07.188 "seek_hole": false, 00:12:07.188 "seek_data": false, 00:12:07.188 "copy": true, 00:12:07.188 "nvme_iov_md": false 00:12:07.188 }, 00:12:07.188 "memory_domains": [ 00:12:07.188 { 00:12:07.188 "dma_device_id": "system", 00:12:07.188 "dma_device_type": 1 00:12:07.188 }, 00:12:07.188 { 00:12:07.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.188 "dma_device_type": 2 00:12:07.188 } 00:12:07.188 ], 00:12:07.188 "driver_specific": { 00:12:07.188 "passthru": { 00:12:07.188 "name": "pt3", 00:12:07.188 "base_bdev_name": "malloc3" 00:12:07.188 } 00:12:07.188 } 00:12:07.188 }' 00:12:07.188 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.447 10:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.447 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.705 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:07.705 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:07.705 [2024-07-15 10:19:32.401233] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.705 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0 00:12:07.705 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0 ']' 00:12:07.705 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:07.964 [2024-07-15 10:19:32.569483] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.964 [2024-07-15 10:19:32.569495] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.964 [2024-07-15 10:19:32.569526] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.964 [2024-07-15 10:19:32.569563] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.964 [2024-07-15 10:19:32.569571] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be6cb0 name raid_bdev1, state offline 00:12:07.964 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.964 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:08.222 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:08.222 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:08.222 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.222 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:08.222 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.222 10:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:08.517 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:08.517 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:08.517 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:08.517 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
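With raid_bdev1 deleted and the pt1/pt2/pt3 passthru bdevs removed, the test retries bdev_raid_create directly on the malloc bdevs, wrapped in NOT because failure is the expected outcome: each malloc bdev still holds the raid superblock left behind by the earlier array. A hand-rolled equivalent of that negative check (illustrative only; the $rpc shorthand and the explicit if/exit stand in for the NOT helper from autotest_common.sh):

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Expected to fail with -17 (File exists): the base bdevs carry a stale superblock.
  if $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
      echo "bdev_raid_create unexpectedly succeeded" >&2
      exit 1
  fi

  # Once a base bdev is re-registered later in the trace and the array reappears,
  # its state can be read the same way verify_raid_bdev_state does it:
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
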
00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:08.798 [2024-07-15 10:19:33.564024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:08.798 [2024-07-15 10:19:33.564966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:08.798 [2024-07-15 10:19:33.564996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:08.798 [2024-07-15 10:19:33.565026] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:08.798 [2024-07-15 10:19:33.565055] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:08.798 [2024-07-15 10:19:33.565068] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:08.798 [2024-07-15 10:19:33.565080] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.798 [2024-07-15 10:19:33.565086] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1befd50 name raid_bdev1, state configuring 00:12:08.798 request: 00:12:08.798 { 00:12:08.798 "name": "raid_bdev1", 00:12:08.798 "raid_level": "raid0", 00:12:08.798 "base_bdevs": [ 00:12:08.798 "malloc1", 00:12:08.798 "malloc2", 00:12:08.798 "malloc3" 00:12:08.798 ], 00:12:08.798 "strip_size_kb": 64, 00:12:08.798 "superblock": false, 00:12:08.798 "method": "bdev_raid_create", 00:12:08.798 "req_id": 1 00:12:08.798 } 00:12:08.798 Got JSON-RPC error response 00:12:08.798 response: 00:12:08.798 { 00:12:08.798 "code": -17, 00:12:08.798 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:08.798 } 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.798 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:09.056 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:09.056 10:19:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:09.056 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:09.314 [2024-07-15 10:19:33.896843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:09.314 [2024-07-15 10:19:33.896878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.314 [2024-07-15 10:19:33.896890] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be3d00 00:12:09.314 [2024-07-15 10:19:33.896921] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.314 [2024-07-15 10:19:33.898245] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.314 [2024-07-15 10:19:33.898269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:09.314 [2024-07-15 10:19:33.898321] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:09.314 [2024-07-15 10:19:33.898339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:09.314 pt1 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.314 10:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.314 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.314 "name": "raid_bdev1", 00:12:09.314 "uuid": "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0", 00:12:09.314 "strip_size_kb": 64, 00:12:09.314 "state": "configuring", 00:12:09.314 "raid_level": "raid0", 00:12:09.314 "superblock": true, 00:12:09.314 "num_base_bdevs": 3, 00:12:09.314 "num_base_bdevs_discovered": 1, 00:12:09.314 "num_base_bdevs_operational": 3, 00:12:09.314 "base_bdevs_list": [ 00:12:09.314 { 00:12:09.314 "name": "pt1", 00:12:09.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:09.314 "is_configured": true, 00:12:09.314 "data_offset": 2048, 00:12:09.314 "data_size": 63488 00:12:09.314 }, 00:12:09.314 { 00:12:09.314 "name": null, 00:12:09.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.314 
"is_configured": false, 00:12:09.314 "data_offset": 2048, 00:12:09.314 "data_size": 63488 00:12:09.314 }, 00:12:09.314 { 00:12:09.314 "name": null, 00:12:09.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.314 "is_configured": false, 00:12:09.314 "data_offset": 2048, 00:12:09.314 "data_size": 63488 00:12:09.314 } 00:12:09.314 ] 00:12:09.314 }' 00:12:09.314 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.314 10:19:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.880 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:09.880 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:09.880 [2024-07-15 10:19:34.666844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:09.880 [2024-07-15 10:19:34.666883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.880 [2024-07-15 10:19:34.666899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be4370 00:12:09.880 [2024-07-15 10:19:34.666910] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.880 [2024-07-15 10:19:34.667146] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.880 [2024-07-15 10:19:34.667158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:09.881 [2024-07-15 10:19:34.667202] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:09.881 [2024-07-15 10:19:34.667215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.140 pt2 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:10.140 [2024-07-15 10:19:34.839285] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.140 10:19:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.399 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.399 "name": "raid_bdev1", 00:12:10.399 "uuid": "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0", 00:12:10.399 "strip_size_kb": 64, 00:12:10.399 "state": "configuring", 00:12:10.399 "raid_level": "raid0", 00:12:10.399 "superblock": true, 00:12:10.399 "num_base_bdevs": 3, 00:12:10.399 "num_base_bdevs_discovered": 1, 00:12:10.399 "num_base_bdevs_operational": 3, 00:12:10.399 "base_bdevs_list": [ 00:12:10.399 { 00:12:10.399 "name": "pt1", 00:12:10.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:10.399 "is_configured": true, 00:12:10.399 "data_offset": 2048, 00:12:10.399 "data_size": 63488 00:12:10.399 }, 00:12:10.399 { 00:12:10.399 "name": null, 00:12:10.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.399 "is_configured": false, 00:12:10.399 "data_offset": 2048, 00:12:10.399 "data_size": 63488 00:12:10.399 }, 00:12:10.399 { 00:12:10.399 "name": null, 00:12:10.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.399 "is_configured": false, 00:12:10.399 "data_offset": 2048, 00:12:10.399 "data_size": 63488 00:12:10.399 } 00:12:10.399 ] 00:12:10.399 }' 00:12:10.399 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.399 10:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.966 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:10.966 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:10.966 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:10.966 [2024-07-15 10:19:35.617288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:10.966 [2024-07-15 10:19:35.617327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.966 [2024-07-15 10:19:35.617341] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a44390 00:12:10.966 [2024-07-15 10:19:35.617349] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.966 [2024-07-15 10:19:35.617588] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.966 [2024-07-15 10:19:35.617600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:10.966 [2024-07-15 10:19:35.617643] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:10.966 [2024-07-15 10:19:35.617656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.966 pt2 00:12:10.966 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:10.966 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:10.966 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:11.225 [2024-07-15 10:19:35.785729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:11.225 [2024-07-15 10:19:35.785753] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:12:11.225 [2024-07-15 10:19:35.785763] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a43e20 00:12:11.225 [2024-07-15 10:19:35.785771] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.225 [2024-07-15 10:19:35.785997] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.225 [2024-07-15 10:19:35.786009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:11.225 [2024-07-15 10:19:35.786043] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:11.225 [2024-07-15 10:19:35.786054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:11.225 [2024-07-15 10:19:35.786120] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1be5530 00:12:11.225 [2024-07-15 10:19:35.786126] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:11.225 [2024-07-15 10:19:35.786230] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a48540 00:12:11.225 [2024-07-15 10:19:35.786305] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1be5530 00:12:11.225 [2024-07-15 10:19:35.786310] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1be5530 00:12:11.225 [2024-07-15 10:19:35.786369] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.225 pt3 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.225 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.225 "name": "raid_bdev1", 00:12:11.225 "uuid": "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0", 00:12:11.225 "strip_size_kb": 64, 00:12:11.226 "state": "online", 00:12:11.226 "raid_level": "raid0", 00:12:11.226 "superblock": true, 00:12:11.226 "num_base_bdevs": 3, 00:12:11.226 
"num_base_bdevs_discovered": 3, 00:12:11.226 "num_base_bdevs_operational": 3, 00:12:11.226 "base_bdevs_list": [ 00:12:11.226 { 00:12:11.226 "name": "pt1", 00:12:11.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:11.226 "is_configured": true, 00:12:11.226 "data_offset": 2048, 00:12:11.226 "data_size": 63488 00:12:11.226 }, 00:12:11.226 { 00:12:11.226 "name": "pt2", 00:12:11.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.226 "is_configured": true, 00:12:11.226 "data_offset": 2048, 00:12:11.226 "data_size": 63488 00:12:11.226 }, 00:12:11.226 { 00:12:11.226 "name": "pt3", 00:12:11.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.226 "is_configured": true, 00:12:11.226 "data_offset": 2048, 00:12:11.226 "data_size": 63488 00:12:11.226 } 00:12:11.226 ] 00:12:11.226 }' 00:12:11.226 10:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.226 10:19:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.794 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:11.794 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:11.794 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:11.794 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:11.794 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:11.794 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:11.794 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:11.794 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:12.054 [2024-07-15 10:19:36.624069] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.054 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:12.054 "name": "raid_bdev1", 00:12:12.054 "aliases": [ 00:12:12.054 "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0" 00:12:12.054 ], 00:12:12.054 "product_name": "Raid Volume", 00:12:12.054 "block_size": 512, 00:12:12.054 "num_blocks": 190464, 00:12:12.054 "uuid": "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0", 00:12:12.054 "assigned_rate_limits": { 00:12:12.054 "rw_ios_per_sec": 0, 00:12:12.054 "rw_mbytes_per_sec": 0, 00:12:12.054 "r_mbytes_per_sec": 0, 00:12:12.054 "w_mbytes_per_sec": 0 00:12:12.054 }, 00:12:12.054 "claimed": false, 00:12:12.054 "zoned": false, 00:12:12.054 "supported_io_types": { 00:12:12.054 "read": true, 00:12:12.054 "write": true, 00:12:12.054 "unmap": true, 00:12:12.054 "flush": true, 00:12:12.054 "reset": true, 00:12:12.054 "nvme_admin": false, 00:12:12.054 "nvme_io": false, 00:12:12.054 "nvme_io_md": false, 00:12:12.054 "write_zeroes": true, 00:12:12.054 "zcopy": false, 00:12:12.054 "get_zone_info": false, 00:12:12.054 "zone_management": false, 00:12:12.054 "zone_append": false, 00:12:12.054 "compare": false, 00:12:12.054 "compare_and_write": false, 00:12:12.054 "abort": false, 00:12:12.054 "seek_hole": false, 00:12:12.054 "seek_data": false, 00:12:12.054 "copy": false, 00:12:12.054 "nvme_iov_md": false 00:12:12.054 }, 00:12:12.054 "memory_domains": [ 00:12:12.054 { 00:12:12.054 "dma_device_id": "system", 00:12:12.054 "dma_device_type": 1 00:12:12.054 }, 
00:12:12.054 { 00:12:12.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.054 "dma_device_type": 2 00:12:12.054 }, 00:12:12.054 { 00:12:12.054 "dma_device_id": "system", 00:12:12.054 "dma_device_type": 1 00:12:12.054 }, 00:12:12.054 { 00:12:12.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.054 "dma_device_type": 2 00:12:12.054 }, 00:12:12.054 { 00:12:12.054 "dma_device_id": "system", 00:12:12.054 "dma_device_type": 1 00:12:12.054 }, 00:12:12.054 { 00:12:12.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.054 "dma_device_type": 2 00:12:12.054 } 00:12:12.054 ], 00:12:12.054 "driver_specific": { 00:12:12.054 "raid": { 00:12:12.054 "uuid": "8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0", 00:12:12.054 "strip_size_kb": 64, 00:12:12.054 "state": "online", 00:12:12.054 "raid_level": "raid0", 00:12:12.054 "superblock": true, 00:12:12.054 "num_base_bdevs": 3, 00:12:12.054 "num_base_bdevs_discovered": 3, 00:12:12.054 "num_base_bdevs_operational": 3, 00:12:12.054 "base_bdevs_list": [ 00:12:12.054 { 00:12:12.054 "name": "pt1", 00:12:12.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.054 "is_configured": true, 00:12:12.054 "data_offset": 2048, 00:12:12.054 "data_size": 63488 00:12:12.054 }, 00:12:12.054 { 00:12:12.054 "name": "pt2", 00:12:12.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.054 "is_configured": true, 00:12:12.054 "data_offset": 2048, 00:12:12.054 "data_size": 63488 00:12:12.054 }, 00:12:12.054 { 00:12:12.054 "name": "pt3", 00:12:12.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.054 "is_configured": true, 00:12:12.054 "data_offset": 2048, 00:12:12.054 "data_size": 63488 00:12:12.054 } 00:12:12.054 ] 00:12:12.054 } 00:12:12.054 } 00:12:12.054 }' 00:12:12.054 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.054 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:12.054 pt2 00:12:12.054 pt3' 00:12:12.054 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:12.054 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:12.054 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:12.313 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:12.313 "name": "pt1", 00:12:12.313 "aliases": [ 00:12:12.313 "00000000-0000-0000-0000-000000000001" 00:12:12.313 ], 00:12:12.313 "product_name": "passthru", 00:12:12.313 "block_size": 512, 00:12:12.313 "num_blocks": 65536, 00:12:12.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.313 "assigned_rate_limits": { 00:12:12.313 "rw_ios_per_sec": 0, 00:12:12.313 "rw_mbytes_per_sec": 0, 00:12:12.313 "r_mbytes_per_sec": 0, 00:12:12.313 "w_mbytes_per_sec": 0 00:12:12.313 }, 00:12:12.313 "claimed": true, 00:12:12.313 "claim_type": "exclusive_write", 00:12:12.313 "zoned": false, 00:12:12.313 "supported_io_types": { 00:12:12.313 "read": true, 00:12:12.313 "write": true, 00:12:12.313 "unmap": true, 00:12:12.313 "flush": true, 00:12:12.313 "reset": true, 00:12:12.313 "nvme_admin": false, 00:12:12.313 "nvme_io": false, 00:12:12.313 "nvme_io_md": false, 00:12:12.313 "write_zeroes": true, 00:12:12.313 "zcopy": true, 00:12:12.313 "get_zone_info": false, 00:12:12.313 "zone_management": false, 00:12:12.313 
"zone_append": false, 00:12:12.313 "compare": false, 00:12:12.313 "compare_and_write": false, 00:12:12.313 "abort": true, 00:12:12.313 "seek_hole": false, 00:12:12.313 "seek_data": false, 00:12:12.313 "copy": true, 00:12:12.313 "nvme_iov_md": false 00:12:12.313 }, 00:12:12.313 "memory_domains": [ 00:12:12.313 { 00:12:12.313 "dma_device_id": "system", 00:12:12.313 "dma_device_type": 1 00:12:12.313 }, 00:12:12.313 { 00:12:12.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.313 "dma_device_type": 2 00:12:12.313 } 00:12:12.313 ], 00:12:12.313 "driver_specific": { 00:12:12.313 "passthru": { 00:12:12.313 "name": "pt1", 00:12:12.313 "base_bdev_name": "malloc1" 00:12:12.313 } 00:12:12.313 } 00:12:12.313 }' 00:12:12.313 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:12.313 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:12.313 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:12.313 10:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.313 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.313 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:12.313 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.313 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:12.572 "name": "pt2", 00:12:12.572 "aliases": [ 00:12:12.572 "00000000-0000-0000-0000-000000000002" 00:12:12.572 ], 00:12:12.572 "product_name": "passthru", 00:12:12.572 "block_size": 512, 00:12:12.572 "num_blocks": 65536, 00:12:12.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.572 "assigned_rate_limits": { 00:12:12.572 "rw_ios_per_sec": 0, 00:12:12.572 "rw_mbytes_per_sec": 0, 00:12:12.572 "r_mbytes_per_sec": 0, 00:12:12.572 "w_mbytes_per_sec": 0 00:12:12.572 }, 00:12:12.572 "claimed": true, 00:12:12.572 "claim_type": "exclusive_write", 00:12:12.572 "zoned": false, 00:12:12.572 "supported_io_types": { 00:12:12.572 "read": true, 00:12:12.572 "write": true, 00:12:12.572 "unmap": true, 00:12:12.572 "flush": true, 00:12:12.572 "reset": true, 00:12:12.572 "nvme_admin": false, 00:12:12.572 "nvme_io": false, 00:12:12.572 "nvme_io_md": false, 00:12:12.572 "write_zeroes": true, 00:12:12.572 "zcopy": true, 00:12:12.572 "get_zone_info": false, 00:12:12.572 "zone_management": false, 00:12:12.572 "zone_append": false, 00:12:12.572 "compare": false, 00:12:12.572 "compare_and_write": false, 00:12:12.572 "abort": true, 00:12:12.572 
"seek_hole": false, 00:12:12.572 "seek_data": false, 00:12:12.572 "copy": true, 00:12:12.572 "nvme_iov_md": false 00:12:12.572 }, 00:12:12.572 "memory_domains": [ 00:12:12.572 { 00:12:12.572 "dma_device_id": "system", 00:12:12.572 "dma_device_type": 1 00:12:12.572 }, 00:12:12.572 { 00:12:12.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.572 "dma_device_type": 2 00:12:12.572 } 00:12:12.572 ], 00:12:12.572 "driver_specific": { 00:12:12.572 "passthru": { 00:12:12.572 "name": "pt2", 00:12:12.572 "base_bdev_name": "malloc2" 00:12:12.572 } 00:12:12.572 } 00:12:12.572 }' 00:12:12.572 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:12.830 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.088 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.088 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:13.088 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:13.088 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:13.088 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:13.088 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:13.088 "name": "pt3", 00:12:13.088 "aliases": [ 00:12:13.088 "00000000-0000-0000-0000-000000000003" 00:12:13.088 ], 00:12:13.088 "product_name": "passthru", 00:12:13.088 "block_size": 512, 00:12:13.088 "num_blocks": 65536, 00:12:13.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.088 "assigned_rate_limits": { 00:12:13.088 "rw_ios_per_sec": 0, 00:12:13.088 "rw_mbytes_per_sec": 0, 00:12:13.088 "r_mbytes_per_sec": 0, 00:12:13.088 "w_mbytes_per_sec": 0 00:12:13.088 }, 00:12:13.088 "claimed": true, 00:12:13.088 "claim_type": "exclusive_write", 00:12:13.088 "zoned": false, 00:12:13.088 "supported_io_types": { 00:12:13.088 "read": true, 00:12:13.088 "write": true, 00:12:13.088 "unmap": true, 00:12:13.088 "flush": true, 00:12:13.088 "reset": true, 00:12:13.088 "nvme_admin": false, 00:12:13.088 "nvme_io": false, 00:12:13.088 "nvme_io_md": false, 00:12:13.088 "write_zeroes": true, 00:12:13.088 "zcopy": true, 00:12:13.088 "get_zone_info": false, 00:12:13.088 "zone_management": false, 00:12:13.088 "zone_append": false, 00:12:13.088 "compare": false, 00:12:13.088 "compare_and_write": false, 00:12:13.088 "abort": true, 00:12:13.088 "seek_hole": false, 00:12:13.088 "seek_data": false, 00:12:13.088 "copy": true, 00:12:13.088 "nvme_iov_md": false 00:12:13.088 }, 
00:12:13.088 "memory_domains": [ 00:12:13.088 { 00:12:13.088 "dma_device_id": "system", 00:12:13.088 "dma_device_type": 1 00:12:13.088 }, 00:12:13.088 { 00:12:13.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.088 "dma_device_type": 2 00:12:13.088 } 00:12:13.088 ], 00:12:13.088 "driver_specific": { 00:12:13.088 "passthru": { 00:12:13.088 "name": "pt3", 00:12:13.088 "base_bdev_name": "malloc3" 00:12:13.088 } 00:12:13.088 } 00:12:13.088 }' 00:12:13.088 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:13.088 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:13.346 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:13.346 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:13.346 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:13.346 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:13.346 10:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:13.346 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:13.346 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:13.346 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.346 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:13.346 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:13.604 [2024-07-15 10:19:38.284337] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0 '!=' 8d8e591c-6a4d-4e53-952d-5e6f73a0c4e0 ']' 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1772581 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1772581 ']' 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1772581 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1772581 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1772581' 00:12:13.604 
killing process with pid 1772581 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1772581 00:12:13.604 [2024-07-15 10:19:38.366039] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.604 [2024-07-15 10:19:38.366081] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.604 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1772581 00:12:13.604 [2024-07-15 10:19:38.366117] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.604 [2024-07-15 10:19:38.366125] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be5530 name raid_bdev1, state offline 00:12:13.604 [2024-07-15 10:19:38.388517] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.863 10:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:13.863 00:12:13.863 real 0m10.706s 00:12:13.863 user 0m19.113s 00:12:13.863 sys 0m2.070s 00:12:13.863 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.863 10:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.863 ************************************ 00:12:13.863 END TEST raid_superblock_test 00:12:13.863 ************************************ 00:12:13.863 10:19:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:13.863 10:19:38 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:13.863 10:19:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:13.863 10:19:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.863 10:19:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.863 ************************************ 00:12:13.863 START TEST raid_read_error_test 00:12:13.863 ************************************ 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
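Before the read-error test gets going, it is worth noting the teardown pattern that closed raid_superblock_test a few entries above: killprocess first checks that the target pid is still alive and is not a bare sudo wrapper before signalling it and waiting for it to exit. A rough reconstruction of that pattern from the traced commands follows; the canonical helper lives in common/autotest_common.sh (per the @-line references above) and carries more edge-case handling than shown here.

# Hedged reconstruction of the killprocess flow visible in the trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                   # target must still be running
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            return 1                             # the real helper special-cases sudo wrappers
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}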
00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.XRmUQQsZkv 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1774675 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1774675 /var/tmp/spdk-raid.sock 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1774675 ']' 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:13.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.863 10:19:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:14.121 [2024-07-15 10:19:38.697408] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:14.121 [2024-07-15 10:19:38.697454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774675 ] 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:14.121 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:14.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:14.121 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:14.121 [2024-07-15 10:19:38.790398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.121 [2024-07-15 10:19:38.864250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.380 [2024-07-15 10:19:38.913462] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.380 [2024-07-15 10:19:38.913485] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.945 10:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.945 10:19:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:14.945 10:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:14.945 10:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:14.945 BaseBdev1_malloc 00:12:14.945 10:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:15.204 true 00:12:15.204 10:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:15.204 [2024-07-15 10:19:39.941434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:15.204 [2024-07-15 10:19:39.941467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.204 [2024-07-15 10:19:39.941482] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2314190 00:12:15.204 [2024-07-15 10:19:39.941490] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.204 [2024-07-15 10:19:39.942662] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.204 [2024-07-15 10:19:39.942684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:15.204 BaseBdev1 00:12:15.204 10:19:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:15.204 10:19:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:15.463 BaseBdev2_malloc 00:12:15.463 10:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:15.721 true 00:12:15.721 10:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:15.721 [2024-07-15 10:19:40.450573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:15.721 [2024-07-15 10:19:40.450604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.721 [2024-07-15 10:19:40.450618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2318e20 00:12:15.721 [2024-07-15 10:19:40.450626] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.721 [2024-07-15 10:19:40.451555] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.721 [2024-07-15 10:19:40.451577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:15.721 BaseBdev2 00:12:15.721 10:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:15.721 10:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:15.980 BaseBdev3_malloc 00:12:15.980 10:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:16.238 true 00:12:16.238 10:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:16.238 [2024-07-15 10:19:40.971336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:16.238 [2024-07-15 10:19:40.971367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.238 [2024-07-15 10:19:40.971383] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2319d90 00:12:16.238 [2024-07-15 10:19:40.971392] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.238 [2024-07-15 10:19:40.972419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.238 [2024-07-15 10:19:40.972440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:16.238 BaseBdev3 00:12:16.239 10:19:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:16.497 [2024-07-15 10:19:41.127761] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.497 [2024-07-15 10:19:41.128614] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.497 [2024-07-15 10:19:41.128659] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.497 [2024-07-15 10:19:41.128788] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x231bba0 00:12:16.497 [2024-07-15 10:19:41.128795] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:16.497 [2024-07-15 10:19:41.128921] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x216fab0 00:12:16.497 [2024-07-15 10:19:41.129017] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x231bba0 00:12:16.497 [2024-07-15 10:19:41.129023] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x231bba0 00:12:16.497 [2024-07-15 10:19:41.129086] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.497 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.756 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:16.756 "name": "raid_bdev1", 00:12:16.756 "uuid": "cc0b386b-4515-4e4a-b8a3-4b088b44e6d8", 00:12:16.756 "strip_size_kb": 64, 00:12:16.756 "state": "online", 00:12:16.756 "raid_level": "raid0", 00:12:16.756 "superblock": true, 00:12:16.756 "num_base_bdevs": 3, 00:12:16.756 "num_base_bdevs_discovered": 3, 00:12:16.756 "num_base_bdevs_operational": 3, 00:12:16.756 "base_bdevs_list": [ 00:12:16.756 { 00:12:16.756 "name": "BaseBdev1", 00:12:16.756 "uuid": "caa9b4bc-f947-561a-b2cb-3953d4a1df63", 00:12:16.756 "is_configured": true, 00:12:16.756 "data_offset": 2048, 00:12:16.756 "data_size": 63488 00:12:16.756 }, 00:12:16.756 { 00:12:16.756 "name": "BaseBdev2", 00:12:16.756 "uuid": "8f74d198-4006-5fe4-8e4e-d212612f8b85", 00:12:16.756 "is_configured": true, 00:12:16.756 "data_offset": 2048, 00:12:16.756 "data_size": 63488 00:12:16.756 }, 00:12:16.756 { 00:12:16.756 "name": "BaseBdev3", 00:12:16.756 "uuid": "85577a15-81f3-59e7-99b7-5bae29562966", 00:12:16.756 "is_configured": true, 00:12:16.756 "data_offset": 2048, 00:12:16.756 "data_size": 63488 
00:12:16.756 } 00:12:16.756 ] 00:12:16.756 }' 00:12:16.756 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:16.756 10:19:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.015 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:17.015 10:19:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:17.274 [2024-07-15 10:19:41.881916] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e6e6c0 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.209 10:19:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.468 10:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.468 "name": "raid_bdev1", 00:12:18.468 "uuid": "cc0b386b-4515-4e4a-b8a3-4b088b44e6d8", 00:12:18.468 "strip_size_kb": 64, 00:12:18.468 "state": "online", 00:12:18.468 "raid_level": "raid0", 00:12:18.468 "superblock": true, 00:12:18.468 "num_base_bdevs": 3, 00:12:18.468 "num_base_bdevs_discovered": 3, 00:12:18.468 "num_base_bdevs_operational": 3, 00:12:18.468 "base_bdevs_list": [ 00:12:18.468 { 00:12:18.468 "name": "BaseBdev1", 00:12:18.468 "uuid": "caa9b4bc-f947-561a-b2cb-3953d4a1df63", 00:12:18.468 "is_configured": true, 00:12:18.468 "data_offset": 2048, 00:12:18.468 "data_size": 63488 00:12:18.468 }, 00:12:18.468 { 00:12:18.468 "name": "BaseBdev2", 00:12:18.468 "uuid": "8f74d198-4006-5fe4-8e4e-d212612f8b85", 00:12:18.468 "is_configured": true, 00:12:18.468 "data_offset": 2048, 
00:12:18.468 "data_size": 63488 00:12:18.468 }, 00:12:18.468 { 00:12:18.468 "name": "BaseBdev3", 00:12:18.468 "uuid": "85577a15-81f3-59e7-99b7-5bae29562966", 00:12:18.468 "is_configured": true, 00:12:18.468 "data_offset": 2048, 00:12:18.468 "data_size": 63488 00:12:18.468 } 00:12:18.468 ] 00:12:18.468 }' 00:12:18.468 10:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.468 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.035 10:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:19.035 [2024-07-15 10:19:43.777635] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.035 [2024-07-15 10:19:43.777664] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.035 [2024-07-15 10:19:43.779653] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.035 [2024-07-15 10:19:43.779680] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.035 [2024-07-15 10:19:43.779703] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.035 [2024-07-15 10:19:43.779710] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x231bba0 name raid_bdev1, state offline 00:12:19.035 0 00:12:19.035 10:19:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1774675 00:12:19.035 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1774675 ']' 00:12:19.035 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1774675 00:12:19.035 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:19.035 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.035 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774675 00:12:19.294 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:19.294 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:19.294 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774675' 00:12:19.294 killing process with pid 1774675 00:12:19.294 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1774675 00:12:19.294 [2024-07-15 10:19:43.850119] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.294 10:19:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1774675 00:12:19.294 [2024-07-15 10:19:43.867214] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.294 10:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.XRmUQQsZkv 00:12:19.294 10:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:19.295 10:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:19.295 10:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:12:19.295 10:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:12:19.295 10:19:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:12:19.295 10:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:19.295 10:19:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:12:19.295 00:12:19.295 real 0m5.425s 00:12:19.295 user 0m8.279s 00:12:19.295 sys 0m0.949s 00:12:19.295 10:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.295 10:19:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.295 ************************************ 00:12:19.295 END TEST raid_read_error_test 00:12:19.295 ************************************ 00:12:19.553 10:19:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:19.553 10:19:44 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:19.553 10:19:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:19.553 10:19:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.553 10:19:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.553 ************************************ 00:12:19.553 START TEST raid_write_error_test 00:12:19.553 ************************************ 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ver2mHlOpA 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1775681 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1775681 /var/tmp/spdk-raid.sock 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1775681 ']' 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:19.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.554 10:19:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.554 [2024-07-15 10:19:44.212789] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
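For readers following this trace: the raid_write_error_test run below drives bdevperf over the /var/tmp/spdk-raid.sock RPC socket and asserts a nonzero write-failure rate on the raid0 volume. A condensed, illustrative sketch of the RPC sequence the trace performs is shown here; rpc.py is shortened from the full /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py path used in the log, and the per-bdev loop is unrolled only for BaseBdev1.
# build one error-injectable base bdev (the trace repeats this for BaseBdev2 and BaseBdev3)
rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# assemble the raid0 volume under test, then inject write failures on the first base bdev
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
# kick off the bdevperf workload that exercises the injected failures
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests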
00:12:19.554 [2024-07-15 10:19:44.212839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775681 ] 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:19.554 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:19.554 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.554 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:19.554 [2024-07-15 10:19:44.302798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.813 [2024-07-15 10:19:44.373739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.813 [2024-07-15 10:19:44.426933] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.813 [2024-07-15 10:19:44.426957] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.378 10:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.378 10:19:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:20.378 10:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:20.378 10:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:20.378 BaseBdev1_malloc 00:12:20.636 10:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:20.636 true 00:12:20.636 10:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:20.894 [2024-07-15 10:19:45.499395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:20.895 [2024-07-15 10:19:45.499433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.895 [2024-07-15 10:19:45.499446] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1389190 00:12:20.895 [2024-07-15 10:19:45.499455] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.895 [2024-07-15 10:19:45.500591] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.895 [2024-07-15 10:19:45.500614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.895 BaseBdev1 00:12:20.895 10:19:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:20.895 10:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.895 BaseBdev2_malloc 00:12:21.152 10:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:21.152 true 00:12:21.152 10:19:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:21.423 [2024-07-15 10:19:46.008240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:21.423 [2024-07-15 10:19:46.008273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.423 [2024-07-15 10:19:46.008285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x138de20 00:12:21.423 [2024-07-15 10:19:46.008292] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.423 [2024-07-15 10:19:46.009199] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.423 [2024-07-15 10:19:46.009221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:21.423 BaseBdev2 00:12:21.423 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:21.423 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:21.423 BaseBdev3_malloc 00:12:21.693 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:21.693 true 00:12:21.693 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:21.952 [2024-07-15 10:19:46.541345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:21.952 [2024-07-15 10:19:46.541377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.952 [2024-07-15 10:19:46.541390] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x138ed90 00:12:21.952 [2024-07-15 10:19:46.541398] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.952 [2024-07-15 10:19:46.542318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.952 [2024-07-15 10:19:46.542351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:21.952 BaseBdev3 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:21.952 [2024-07-15 10:19:46.709798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.952 [2024-07-15 10:19:46.710626] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.952 [2024-07-15 10:19:46.710675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:21.952 [2024-07-15 10:19:46.710807] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1390ba0 00:12:21.952 [2024-07-15 10:19:46.710814] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:21.952 [2024-07-15 10:19:46.710938] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e4ab0 00:12:21.952 [2024-07-15 10:19:46.711035] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1390ba0 00:12:21.952 [2024-07-15 10:19:46.711042] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1390ba0 00:12:21.952 [2024-07-15 10:19:46.711106] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.952 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.267 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:22.267 "name": "raid_bdev1", 00:12:22.267 "uuid": "159a49a7-ec91-4f72-8bed-e1f3a565f6d2", 00:12:22.267 "strip_size_kb": 64, 00:12:22.267 "state": "online", 00:12:22.267 "raid_level": "raid0", 00:12:22.267 "superblock": true, 00:12:22.267 "num_base_bdevs": 3, 00:12:22.267 "num_base_bdevs_discovered": 3, 00:12:22.267 "num_base_bdevs_operational": 3, 00:12:22.267 "base_bdevs_list": [ 00:12:22.267 { 00:12:22.267 "name": "BaseBdev1", 00:12:22.267 "uuid": "9e4681d9-a3ad-5c2c-9daa-1f51b1930e06", 00:12:22.267 "is_configured": true, 00:12:22.267 "data_offset": 2048, 00:12:22.267 "data_size": 63488 00:12:22.267 }, 00:12:22.267 { 00:12:22.267 "name": "BaseBdev2", 00:12:22.267 "uuid": "715955f6-5a35-5bc0-8e40-4e7df31feab4", 00:12:22.267 "is_configured": true, 00:12:22.267 "data_offset": 2048, 00:12:22.267 "data_size": 63488 00:12:22.267 }, 00:12:22.267 { 00:12:22.267 "name": "BaseBdev3", 00:12:22.267 "uuid": "9c0beaab-d354-5473-a252-b01f81f13358", 00:12:22.267 "is_configured": true, 00:12:22.267 "data_offset": 2048, 00:12:22.267 
"data_size": 63488 00:12:22.267 } 00:12:22.267 ] 00:12:22.267 }' 00:12:22.267 10:19:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:22.267 10:19:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.835 10:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:22.835 10:19:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:22.835 [2024-07-15 10:19:47.452104] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xee36c0 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.772 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.030 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.030 "name": "raid_bdev1", 00:12:24.030 "uuid": "159a49a7-ec91-4f72-8bed-e1f3a565f6d2", 00:12:24.030 "strip_size_kb": 64, 00:12:24.030 "state": "online", 00:12:24.030 "raid_level": "raid0", 00:12:24.030 "superblock": true, 00:12:24.030 "num_base_bdevs": 3, 00:12:24.030 "num_base_bdevs_discovered": 3, 00:12:24.030 "num_base_bdevs_operational": 3, 00:12:24.030 "base_bdevs_list": [ 00:12:24.030 { 00:12:24.030 "name": "BaseBdev1", 00:12:24.030 "uuid": "9e4681d9-a3ad-5c2c-9daa-1f51b1930e06", 00:12:24.030 "is_configured": true, 00:12:24.030 "data_offset": 2048, 00:12:24.030 "data_size": 63488 00:12:24.030 }, 00:12:24.030 { 00:12:24.030 "name": "BaseBdev2", 00:12:24.030 "uuid": "715955f6-5a35-5bc0-8e40-4e7df31feab4", 00:12:24.030 "is_configured": true, 
00:12:24.030 "data_offset": 2048, 00:12:24.030 "data_size": 63488 00:12:24.030 }, 00:12:24.030 { 00:12:24.030 "name": "BaseBdev3", 00:12:24.030 "uuid": "9c0beaab-d354-5473-a252-b01f81f13358", 00:12:24.030 "is_configured": true, 00:12:24.030 "data_offset": 2048, 00:12:24.030 "data_size": 63488 00:12:24.030 } 00:12:24.030 ] 00:12:24.030 }' 00:12:24.030 10:19:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.030 10:19:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.598 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:24.598 [2024-07-15 10:19:49.367713] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.598 [2024-07-15 10:19:49.367751] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.598 [2024-07-15 10:19:49.369711] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.598 [2024-07-15 10:19:49.369737] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.598 [2024-07-15 10:19:49.369758] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.598 [2024-07-15 10:19:49.369766] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1390ba0 name raid_bdev1, state offline 00:12:24.598 0 00:12:24.598 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1775681 00:12:24.598 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1775681 ']' 00:12:24.598 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1775681 00:12:24.857 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:24.857 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:24.857 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1775681 00:12:24.857 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:24.857 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:24.857 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1775681' 00:12:24.857 killing process with pid 1775681 00:12:24.857 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1775681 00:12:24.857 [2024-07-15 10:19:49.444023] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.857 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1775681 00:12:24.857 [2024-07-15 10:19:49.461302] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ver2mHlOpA 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 
00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:12:25.116 00:12:25.116 real 0m5.513s 00:12:25.116 user 0m8.395s 00:12:25.116 sys 0m0.962s 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:25.116 10:19:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.116 ************************************ 00:12:25.116 END TEST raid_write_error_test 00:12:25.116 ************************************ 00:12:25.116 10:19:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:25.116 10:19:49 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:25.116 10:19:49 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:12:25.116 10:19:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:25.116 10:19:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.116 10:19:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.116 ************************************ 00:12:25.116 START TEST raid_state_function_test 00:12:25.116 ************************************ 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:25.116 10:19:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1776825 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1776825' 00:12:25.116 Process raid pid: 1776825 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1776825 /var/tmp/spdk-raid.sock 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1776825 ']' 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:25.116 10:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.117 10:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:25.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:25.117 10:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.117 10:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.117 [2024-07-15 10:19:49.804923] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
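For orientation in the trace that follows: raid_state_function_test exercises the "configuring" state machine of a concat raid bdev. It creates Existed_Raid before any of its base bdevs exist, then adds base bdevs one at a time and re-reads the raid state over RPC, expecting num_base_bdevs_discovered to grow while the state stays "configuring". A condensed sketch of the round-trip it repeats (rpc.py shortened from the full scripts/rpc.py path used in the log):
# create the raid while the base bdevs are still missing; it is registered in the "configuring" state
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# add one base bdev, then re-check the raid state and the discovered-bdev count
rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'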
00:12:25.117 [2024-07-15 10:19:49.804970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:25.117 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:25.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.117 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:25.117 [2024-07-15 10:19:49.896732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.375 [2024-07-15 10:19:49.967379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.375 [2024-07-15 10:19:50.034816] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.375 [2024-07-15 10:19:50.034839] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.941 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.941 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:25.941 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:26.199 [2024-07-15 10:19:50.759001] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:26.199 [2024-07-15 10:19:50.759035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:26.199 [2024-07-15 10:19:50.759043] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:26.199 [2024-07-15 10:19:50.759051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:26.199 [2024-07-15 10:19:50.759057] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:26.199 [2024-07-15 10:19:50.759064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.199 "name": "Existed_Raid", 00:12:26.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.199 "strip_size_kb": 64, 00:12:26.199 "state": "configuring", 00:12:26.199 "raid_level": "concat", 00:12:26.199 "superblock": false, 00:12:26.199 "num_base_bdevs": 3, 00:12:26.199 "num_base_bdevs_discovered": 0, 00:12:26.199 "num_base_bdevs_operational": 3, 00:12:26.199 "base_bdevs_list": [ 00:12:26.199 { 00:12:26.199 "name": "BaseBdev1", 00:12:26.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.199 "is_configured": false, 00:12:26.199 "data_offset": 0, 00:12:26.199 "data_size": 0 00:12:26.199 }, 00:12:26.199 { 00:12:26.199 "name": "BaseBdev2", 00:12:26.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.199 "is_configured": false, 00:12:26.199 "data_offset": 0, 00:12:26.199 "data_size": 0 00:12:26.199 }, 00:12:26.199 { 00:12:26.199 "name": "BaseBdev3", 00:12:26.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.199 "is_configured": false, 00:12:26.199 "data_offset": 0, 00:12:26.199 "data_size": 0 00:12:26.199 } 00:12:26.199 ] 00:12:26.199 }' 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.199 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.763 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:27.021 [2024-07-15 10:19:51.617118] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:27.021 [2024-07-15 10:19:51.617139] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22c0f40 name Existed_Raid, state configuring 00:12:27.021 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:27.021 [2024-07-15 10:19:51.785565] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:27.021 [2024-07-15 10:19:51.785586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:27.021 [2024-07-15 10:19:51.785592] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:27.021 [2024-07-15 10:19:51.785599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:12:27.021 [2024-07-15 10:19:51.785605] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:27.021 [2024-07-15 10:19:51.785612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:27.021 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:27.279 [2024-07-15 10:19:51.962675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.279 BaseBdev1 00:12:27.279 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:27.279 10:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:27.279 10:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:27.279 10:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:27.279 10:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:27.279 10:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:27.279 10:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:27.537 [ 00:12:27.537 { 00:12:27.537 "name": "BaseBdev1", 00:12:27.537 "aliases": [ 00:12:27.537 "939d5ad7-1af3-4012-93fd-5dc134125982" 00:12:27.537 ], 00:12:27.537 "product_name": "Malloc disk", 00:12:27.537 "block_size": 512, 00:12:27.537 "num_blocks": 65536, 00:12:27.537 "uuid": "939d5ad7-1af3-4012-93fd-5dc134125982", 00:12:27.537 "assigned_rate_limits": { 00:12:27.537 "rw_ios_per_sec": 0, 00:12:27.537 "rw_mbytes_per_sec": 0, 00:12:27.537 "r_mbytes_per_sec": 0, 00:12:27.537 "w_mbytes_per_sec": 0 00:12:27.537 }, 00:12:27.537 "claimed": true, 00:12:27.537 "claim_type": "exclusive_write", 00:12:27.537 "zoned": false, 00:12:27.537 "supported_io_types": { 00:12:27.537 "read": true, 00:12:27.537 "write": true, 00:12:27.537 "unmap": true, 00:12:27.537 "flush": true, 00:12:27.537 "reset": true, 00:12:27.537 "nvme_admin": false, 00:12:27.537 "nvme_io": false, 00:12:27.537 "nvme_io_md": false, 00:12:27.537 "write_zeroes": true, 00:12:27.537 "zcopy": true, 00:12:27.537 "get_zone_info": false, 00:12:27.537 "zone_management": false, 00:12:27.537 "zone_append": false, 00:12:27.537 "compare": false, 00:12:27.537 "compare_and_write": false, 00:12:27.537 "abort": true, 00:12:27.537 "seek_hole": false, 00:12:27.537 "seek_data": false, 00:12:27.537 "copy": true, 00:12:27.537 "nvme_iov_md": false 00:12:27.537 }, 00:12:27.537 "memory_domains": [ 00:12:27.537 { 00:12:27.537 "dma_device_id": "system", 00:12:27.537 "dma_device_type": 1 00:12:27.537 }, 00:12:27.537 { 00:12:27.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.537 "dma_device_type": 2 00:12:27.537 } 00:12:27.537 ], 00:12:27.537 "driver_specific": {} 00:12:27.537 } 00:12:27.537 ] 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:27.537 10:19:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.537 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.795 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.795 "name": "Existed_Raid", 00:12:27.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.795 "strip_size_kb": 64, 00:12:27.795 "state": "configuring", 00:12:27.795 "raid_level": "concat", 00:12:27.795 "superblock": false, 00:12:27.795 "num_base_bdevs": 3, 00:12:27.795 "num_base_bdevs_discovered": 1, 00:12:27.795 "num_base_bdevs_operational": 3, 00:12:27.795 "base_bdevs_list": [ 00:12:27.795 { 00:12:27.795 "name": "BaseBdev1", 00:12:27.795 "uuid": "939d5ad7-1af3-4012-93fd-5dc134125982", 00:12:27.795 "is_configured": true, 00:12:27.795 "data_offset": 0, 00:12:27.795 "data_size": 65536 00:12:27.795 }, 00:12:27.795 { 00:12:27.795 "name": "BaseBdev2", 00:12:27.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.795 "is_configured": false, 00:12:27.795 "data_offset": 0, 00:12:27.795 "data_size": 0 00:12:27.795 }, 00:12:27.795 { 00:12:27.795 "name": "BaseBdev3", 00:12:27.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.795 "is_configured": false, 00:12:27.795 "data_offset": 0, 00:12:27.795 "data_size": 0 00:12:27.795 } 00:12:27.795 ] 00:12:27.795 }' 00:12:27.795 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.795 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.359 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:28.359 [2024-07-15 10:19:53.133687] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.359 [2024-07-15 10:19:53.133717] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22c0810 name Existed_Raid, state configuring 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 
-r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:28.616 [2024-07-15 10:19:53.302171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.616 [2024-07-15 10:19:53.303251] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.616 [2024-07-15 10:19:53.303279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.616 [2024-07-15 10:19:53.303286] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.616 [2024-07-15 10:19:53.303293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.616 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.872 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.872 "name": "Existed_Raid", 00:12:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.872 "strip_size_kb": 64, 00:12:28.872 "state": "configuring", 00:12:28.872 "raid_level": "concat", 00:12:28.872 "superblock": false, 00:12:28.872 "num_base_bdevs": 3, 00:12:28.872 "num_base_bdevs_discovered": 1, 00:12:28.872 "num_base_bdevs_operational": 3, 00:12:28.872 "base_bdevs_list": [ 00:12:28.872 { 00:12:28.872 "name": "BaseBdev1", 00:12:28.872 "uuid": "939d5ad7-1af3-4012-93fd-5dc134125982", 00:12:28.872 "is_configured": true, 00:12:28.872 "data_offset": 0, 00:12:28.872 "data_size": 65536 00:12:28.872 }, 00:12:28.872 { 00:12:28.872 "name": "BaseBdev2", 00:12:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.872 "is_configured": false, 00:12:28.872 "data_offset": 0, 00:12:28.872 "data_size": 0 00:12:28.872 }, 00:12:28.872 { 00:12:28.872 "name": "BaseBdev3", 00:12:28.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.872 "is_configured": false, 00:12:28.872 "data_offset": 0, 
00:12:28.872 "data_size": 0 00:12:28.872 } 00:12:28.872 ] 00:12:28.872 }' 00:12:28.872 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.872 10:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.436 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:29.436 [2024-07-15 10:19:54.119075] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.436 BaseBdev2 00:12:29.436 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:29.436 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:29.436 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:29.436 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:29.436 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:29.436 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:29.436 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:29.693 [ 00:12:29.693 { 00:12:29.693 "name": "BaseBdev2", 00:12:29.693 "aliases": [ 00:12:29.693 "db171e45-135e-456a-a877-06affc89fd96" 00:12:29.693 ], 00:12:29.693 "product_name": "Malloc disk", 00:12:29.693 "block_size": 512, 00:12:29.693 "num_blocks": 65536, 00:12:29.693 "uuid": "db171e45-135e-456a-a877-06affc89fd96", 00:12:29.693 "assigned_rate_limits": { 00:12:29.693 "rw_ios_per_sec": 0, 00:12:29.693 "rw_mbytes_per_sec": 0, 00:12:29.693 "r_mbytes_per_sec": 0, 00:12:29.693 "w_mbytes_per_sec": 0 00:12:29.693 }, 00:12:29.693 "claimed": true, 00:12:29.693 "claim_type": "exclusive_write", 00:12:29.693 "zoned": false, 00:12:29.693 "supported_io_types": { 00:12:29.693 "read": true, 00:12:29.693 "write": true, 00:12:29.693 "unmap": true, 00:12:29.693 "flush": true, 00:12:29.693 "reset": true, 00:12:29.693 "nvme_admin": false, 00:12:29.693 "nvme_io": false, 00:12:29.693 "nvme_io_md": false, 00:12:29.693 "write_zeroes": true, 00:12:29.693 "zcopy": true, 00:12:29.693 "get_zone_info": false, 00:12:29.693 "zone_management": false, 00:12:29.693 "zone_append": false, 00:12:29.693 "compare": false, 00:12:29.693 "compare_and_write": false, 00:12:29.693 "abort": true, 00:12:29.693 "seek_hole": false, 00:12:29.693 "seek_data": false, 00:12:29.693 "copy": true, 00:12:29.693 "nvme_iov_md": false 00:12:29.693 }, 00:12:29.693 "memory_domains": [ 00:12:29.693 { 00:12:29.693 "dma_device_id": "system", 00:12:29.693 "dma_device_type": 1 00:12:29.693 }, 00:12:29.693 { 00:12:29.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.693 "dma_device_type": 2 00:12:29.693 } 00:12:29.693 ], 00:12:29.693 "driver_specific": {} 00:12:29.693 } 00:12:29.693 ] 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:29.693 10:19:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:29.693 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.951 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.951 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.951 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.951 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.951 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.951 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.951 "name": "Existed_Raid", 00:12:29.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.951 "strip_size_kb": 64, 00:12:29.951 "state": "configuring", 00:12:29.951 "raid_level": "concat", 00:12:29.951 "superblock": false, 00:12:29.951 "num_base_bdevs": 3, 00:12:29.951 "num_base_bdevs_discovered": 2, 00:12:29.951 "num_base_bdevs_operational": 3, 00:12:29.951 "base_bdevs_list": [ 00:12:29.951 { 00:12:29.951 "name": "BaseBdev1", 00:12:29.951 "uuid": "939d5ad7-1af3-4012-93fd-5dc134125982", 00:12:29.951 "is_configured": true, 00:12:29.951 "data_offset": 0, 00:12:29.951 "data_size": 65536 00:12:29.951 }, 00:12:29.951 { 00:12:29.951 "name": "BaseBdev2", 00:12:29.951 "uuid": "db171e45-135e-456a-a877-06affc89fd96", 00:12:29.951 "is_configured": true, 00:12:29.951 "data_offset": 0, 00:12:29.951 "data_size": 65536 00:12:29.951 }, 00:12:29.951 { 00:12:29.952 "name": "BaseBdev3", 00:12:29.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.952 "is_configured": false, 00:12:29.952 "data_offset": 0, 00:12:29.952 "data_size": 0 00:12:29.952 } 00:12:29.952 ] 00:12:29.952 }' 00:12:29.952 10:19:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.952 10:19:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.517 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:30.517 [2024-07-15 10:19:55.272815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.517 [2024-07-15 10:19:55.272844] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22c1700 
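[editor note] A minimal sketch of the per-base-bdev loop this part of the trace exercises, assuming only the commands that appear verbatim in the log (rpc.py path, socket, malloc sizes, bdev names); the comments are a simplified reading of the trace, not the exact bdev_raid.sh code.
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# create the next base bdev: 32 MiB at a 512-byte block size, i.e. the 65536 blocks seen in the dumps above
$RPC bdev_malloc_create 32 512 -b BaseBdev3
# waitforbdev: let examine callbacks finish, then poll (2000 ms timeout) until the bdev is reported
$RPC bdev_wait_for_examine
$RPC bdev_get_bdevs -b BaseBdev3 -t 2000
# verify_raid_bdev_state: the raid stays "configuring" until all num_base_bdevs are discovered
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
[end editor note]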
00:12:30.517 [2024-07-15 10:19:55.272850] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:30.517 [2024-07-15 10:19:55.272981] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22c13d0 00:12:30.517 [2024-07-15 10:19:55.273066] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22c1700 00:12:30.518 [2024-07-15 10:19:55.273073] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x22c1700 00:12:30.518 [2024-07-15 10:19:55.273207] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.518 BaseBdev3 00:12:30.518 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:30.518 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:30.518 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:30.518 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:30.518 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:30.518 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:30.518 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:30.775 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:31.033 [ 00:12:31.033 { 00:12:31.033 "name": "BaseBdev3", 00:12:31.033 "aliases": [ 00:12:31.033 "0cc6c0dd-d738-48ea-9f6e-54200eccbddd" 00:12:31.033 ], 00:12:31.033 "product_name": "Malloc disk", 00:12:31.033 "block_size": 512, 00:12:31.033 "num_blocks": 65536, 00:12:31.033 "uuid": "0cc6c0dd-d738-48ea-9f6e-54200eccbddd", 00:12:31.033 "assigned_rate_limits": { 00:12:31.033 "rw_ios_per_sec": 0, 00:12:31.033 "rw_mbytes_per_sec": 0, 00:12:31.033 "r_mbytes_per_sec": 0, 00:12:31.033 "w_mbytes_per_sec": 0 00:12:31.033 }, 00:12:31.033 "claimed": true, 00:12:31.033 "claim_type": "exclusive_write", 00:12:31.033 "zoned": false, 00:12:31.033 "supported_io_types": { 00:12:31.033 "read": true, 00:12:31.033 "write": true, 00:12:31.033 "unmap": true, 00:12:31.033 "flush": true, 00:12:31.033 "reset": true, 00:12:31.033 "nvme_admin": false, 00:12:31.034 "nvme_io": false, 00:12:31.034 "nvme_io_md": false, 00:12:31.034 "write_zeroes": true, 00:12:31.034 "zcopy": true, 00:12:31.034 "get_zone_info": false, 00:12:31.034 "zone_management": false, 00:12:31.034 "zone_append": false, 00:12:31.034 "compare": false, 00:12:31.034 "compare_and_write": false, 00:12:31.034 "abort": true, 00:12:31.034 "seek_hole": false, 00:12:31.034 "seek_data": false, 00:12:31.034 "copy": true, 00:12:31.034 "nvme_iov_md": false 00:12:31.034 }, 00:12:31.034 "memory_domains": [ 00:12:31.034 { 00:12:31.034 "dma_device_id": "system", 00:12:31.034 "dma_device_type": 1 00:12:31.034 }, 00:12:31.034 { 00:12:31.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.034 "dma_device_type": 2 00:12:31.034 } 00:12:31.034 ], 00:12:31.034 "driver_specific": {} 00:12:31.034 } 00:12:31.034 ] 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:31.034 10:19:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.034 "name": "Existed_Raid", 00:12:31.034 "uuid": "c6f2b590-bcaf-4b52-a920-0eacf35e2478", 00:12:31.034 "strip_size_kb": 64, 00:12:31.034 "state": "online", 00:12:31.034 "raid_level": "concat", 00:12:31.034 "superblock": false, 00:12:31.034 "num_base_bdevs": 3, 00:12:31.034 "num_base_bdevs_discovered": 3, 00:12:31.034 "num_base_bdevs_operational": 3, 00:12:31.034 "base_bdevs_list": [ 00:12:31.034 { 00:12:31.034 "name": "BaseBdev1", 00:12:31.034 "uuid": "939d5ad7-1af3-4012-93fd-5dc134125982", 00:12:31.034 "is_configured": true, 00:12:31.034 "data_offset": 0, 00:12:31.034 "data_size": 65536 00:12:31.034 }, 00:12:31.034 { 00:12:31.034 "name": "BaseBdev2", 00:12:31.034 "uuid": "db171e45-135e-456a-a877-06affc89fd96", 00:12:31.034 "is_configured": true, 00:12:31.034 "data_offset": 0, 00:12:31.034 "data_size": 65536 00:12:31.034 }, 00:12:31.034 { 00:12:31.034 "name": "BaseBdev3", 00:12:31.034 "uuid": "0cc6c0dd-d738-48ea-9f6e-54200eccbddd", 00:12:31.034 "is_configured": true, 00:12:31.034 "data_offset": 0, 00:12:31.034 "data_size": 65536 00:12:31.034 } 00:12:31.034 ] 00:12:31.034 }' 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.034 10:19:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.599 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:31.599 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:31.599 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:31.599 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
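[editor note] A minimal sketch of what the verify_raid_bdev_properties trace below boils down to, reconstructed only from the jq filters visible in the log; the field list (block_size, md_size, md_interleave, dif_type) mirrors the [[ 512 == 512 ]] / [[ null == null ]] checks printed further on, and the loop structure is a simplified paraphrase rather than the script's exact code.
RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_bdev_info=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_bdev_names=$(echo "$raid_bdev_info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
for name in $base_bdev_names; do
    base_bdev_info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
    # every configured base bdev must agree with the raid bdev on these fields
    for field in block_size md_size md_interleave dif_type; do
        [[ $(echo "$raid_bdev_info" | jq ".$field") == $(echo "$base_bdev_info" | jq ".$field") ]]
    done
done
[end editor note]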
00:12:31.599 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:31.599 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:31.599 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:31.599 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:31.858 [2024-07-15 10:19:56.452047] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.858 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:31.858 "name": "Existed_Raid", 00:12:31.858 "aliases": [ 00:12:31.858 "c6f2b590-bcaf-4b52-a920-0eacf35e2478" 00:12:31.858 ], 00:12:31.858 "product_name": "Raid Volume", 00:12:31.858 "block_size": 512, 00:12:31.858 "num_blocks": 196608, 00:12:31.858 "uuid": "c6f2b590-bcaf-4b52-a920-0eacf35e2478", 00:12:31.858 "assigned_rate_limits": { 00:12:31.858 "rw_ios_per_sec": 0, 00:12:31.858 "rw_mbytes_per_sec": 0, 00:12:31.858 "r_mbytes_per_sec": 0, 00:12:31.858 "w_mbytes_per_sec": 0 00:12:31.858 }, 00:12:31.858 "claimed": false, 00:12:31.858 "zoned": false, 00:12:31.858 "supported_io_types": { 00:12:31.858 "read": true, 00:12:31.858 "write": true, 00:12:31.858 "unmap": true, 00:12:31.858 "flush": true, 00:12:31.858 "reset": true, 00:12:31.858 "nvme_admin": false, 00:12:31.858 "nvme_io": false, 00:12:31.858 "nvme_io_md": false, 00:12:31.858 "write_zeroes": true, 00:12:31.858 "zcopy": false, 00:12:31.858 "get_zone_info": false, 00:12:31.858 "zone_management": false, 00:12:31.858 "zone_append": false, 00:12:31.858 "compare": false, 00:12:31.858 "compare_and_write": false, 00:12:31.858 "abort": false, 00:12:31.858 "seek_hole": false, 00:12:31.858 "seek_data": false, 00:12:31.858 "copy": false, 00:12:31.858 "nvme_iov_md": false 00:12:31.858 }, 00:12:31.858 "memory_domains": [ 00:12:31.858 { 00:12:31.858 "dma_device_id": "system", 00:12:31.858 "dma_device_type": 1 00:12:31.858 }, 00:12:31.858 { 00:12:31.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.858 "dma_device_type": 2 00:12:31.858 }, 00:12:31.858 { 00:12:31.858 "dma_device_id": "system", 00:12:31.858 "dma_device_type": 1 00:12:31.858 }, 00:12:31.858 { 00:12:31.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.858 "dma_device_type": 2 00:12:31.858 }, 00:12:31.858 { 00:12:31.858 "dma_device_id": "system", 00:12:31.858 "dma_device_type": 1 00:12:31.858 }, 00:12:31.858 { 00:12:31.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.858 "dma_device_type": 2 00:12:31.858 } 00:12:31.858 ], 00:12:31.858 "driver_specific": { 00:12:31.858 "raid": { 00:12:31.858 "uuid": "c6f2b590-bcaf-4b52-a920-0eacf35e2478", 00:12:31.858 "strip_size_kb": 64, 00:12:31.858 "state": "online", 00:12:31.858 "raid_level": "concat", 00:12:31.858 "superblock": false, 00:12:31.858 "num_base_bdevs": 3, 00:12:31.858 "num_base_bdevs_discovered": 3, 00:12:31.858 "num_base_bdevs_operational": 3, 00:12:31.858 "base_bdevs_list": [ 00:12:31.858 { 00:12:31.858 "name": "BaseBdev1", 00:12:31.858 "uuid": "939d5ad7-1af3-4012-93fd-5dc134125982", 00:12:31.858 "is_configured": true, 00:12:31.858 "data_offset": 0, 00:12:31.858 "data_size": 65536 00:12:31.858 }, 00:12:31.858 { 00:12:31.858 "name": "BaseBdev2", 00:12:31.858 "uuid": "db171e45-135e-456a-a877-06affc89fd96", 00:12:31.858 "is_configured": true, 00:12:31.858 "data_offset": 0, 00:12:31.858 
"data_size": 65536 00:12:31.858 }, 00:12:31.858 { 00:12:31.858 "name": "BaseBdev3", 00:12:31.858 "uuid": "0cc6c0dd-d738-48ea-9f6e-54200eccbddd", 00:12:31.858 "is_configured": true, 00:12:31.858 "data_offset": 0, 00:12:31.858 "data_size": 65536 00:12:31.858 } 00:12:31.858 ] 00:12:31.858 } 00:12:31.858 } 00:12:31.858 }' 00:12:31.858 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.858 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:31.858 BaseBdev2 00:12:31.858 BaseBdev3' 00:12:31.858 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:31.858 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:31.858 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:32.116 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:32.116 "name": "BaseBdev1", 00:12:32.116 "aliases": [ 00:12:32.116 "939d5ad7-1af3-4012-93fd-5dc134125982" 00:12:32.116 ], 00:12:32.116 "product_name": "Malloc disk", 00:12:32.116 "block_size": 512, 00:12:32.116 "num_blocks": 65536, 00:12:32.116 "uuid": "939d5ad7-1af3-4012-93fd-5dc134125982", 00:12:32.116 "assigned_rate_limits": { 00:12:32.116 "rw_ios_per_sec": 0, 00:12:32.116 "rw_mbytes_per_sec": 0, 00:12:32.116 "r_mbytes_per_sec": 0, 00:12:32.116 "w_mbytes_per_sec": 0 00:12:32.116 }, 00:12:32.116 "claimed": true, 00:12:32.116 "claim_type": "exclusive_write", 00:12:32.116 "zoned": false, 00:12:32.116 "supported_io_types": { 00:12:32.116 "read": true, 00:12:32.116 "write": true, 00:12:32.116 "unmap": true, 00:12:32.116 "flush": true, 00:12:32.116 "reset": true, 00:12:32.116 "nvme_admin": false, 00:12:32.116 "nvme_io": false, 00:12:32.116 "nvme_io_md": false, 00:12:32.116 "write_zeroes": true, 00:12:32.116 "zcopy": true, 00:12:32.116 "get_zone_info": false, 00:12:32.116 "zone_management": false, 00:12:32.116 "zone_append": false, 00:12:32.116 "compare": false, 00:12:32.116 "compare_and_write": false, 00:12:32.116 "abort": true, 00:12:32.116 "seek_hole": false, 00:12:32.116 "seek_data": false, 00:12:32.116 "copy": true, 00:12:32.116 "nvme_iov_md": false 00:12:32.116 }, 00:12:32.116 "memory_domains": [ 00:12:32.116 { 00:12:32.116 "dma_device_id": "system", 00:12:32.116 "dma_device_type": 1 00:12:32.116 }, 00:12:32.116 { 00:12:32.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.116 "dma_device_type": 2 00:12:32.116 } 00:12:32.116 ], 00:12:32.116 "driver_specific": {} 00:12:32.116 }' 00:12:32.116 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:32.116 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:32.116 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:32.116 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:32.116 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:32.116 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:32.116 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:32.116 10:19:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:32.374 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:32.374 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:32.374 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:32.374 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:32.374 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:32.374 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:32.374 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:32.633 "name": "BaseBdev2", 00:12:32.633 "aliases": [ 00:12:32.633 "db171e45-135e-456a-a877-06affc89fd96" 00:12:32.633 ], 00:12:32.633 "product_name": "Malloc disk", 00:12:32.633 "block_size": 512, 00:12:32.633 "num_blocks": 65536, 00:12:32.633 "uuid": "db171e45-135e-456a-a877-06affc89fd96", 00:12:32.633 "assigned_rate_limits": { 00:12:32.633 "rw_ios_per_sec": 0, 00:12:32.633 "rw_mbytes_per_sec": 0, 00:12:32.633 "r_mbytes_per_sec": 0, 00:12:32.633 "w_mbytes_per_sec": 0 00:12:32.633 }, 00:12:32.633 "claimed": true, 00:12:32.633 "claim_type": "exclusive_write", 00:12:32.633 "zoned": false, 00:12:32.633 "supported_io_types": { 00:12:32.633 "read": true, 00:12:32.633 "write": true, 00:12:32.633 "unmap": true, 00:12:32.633 "flush": true, 00:12:32.633 "reset": true, 00:12:32.633 "nvme_admin": false, 00:12:32.633 "nvme_io": false, 00:12:32.633 "nvme_io_md": false, 00:12:32.633 "write_zeroes": true, 00:12:32.633 "zcopy": true, 00:12:32.633 "get_zone_info": false, 00:12:32.633 "zone_management": false, 00:12:32.633 "zone_append": false, 00:12:32.633 "compare": false, 00:12:32.633 "compare_and_write": false, 00:12:32.633 "abort": true, 00:12:32.633 "seek_hole": false, 00:12:32.633 "seek_data": false, 00:12:32.633 "copy": true, 00:12:32.633 "nvme_iov_md": false 00:12:32.633 }, 00:12:32.633 "memory_domains": [ 00:12:32.633 { 00:12:32.633 "dma_device_id": "system", 00:12:32.633 "dma_device_type": 1 00:12:32.633 }, 00:12:32.633 { 00:12:32.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.633 "dma_device_type": 2 00:12:32.633 } 00:12:32.633 ], 00:12:32.633 "driver_specific": {} 00:12:32.633 }' 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:12:32.633 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:32.891 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:32.891 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:32.891 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:32.891 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:32.891 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:32.891 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:32.891 "name": "BaseBdev3", 00:12:32.891 "aliases": [ 00:12:32.891 "0cc6c0dd-d738-48ea-9f6e-54200eccbddd" 00:12:32.891 ], 00:12:32.891 "product_name": "Malloc disk", 00:12:32.891 "block_size": 512, 00:12:32.891 "num_blocks": 65536, 00:12:32.891 "uuid": "0cc6c0dd-d738-48ea-9f6e-54200eccbddd", 00:12:32.891 "assigned_rate_limits": { 00:12:32.891 "rw_ios_per_sec": 0, 00:12:32.891 "rw_mbytes_per_sec": 0, 00:12:32.891 "r_mbytes_per_sec": 0, 00:12:32.891 "w_mbytes_per_sec": 0 00:12:32.891 }, 00:12:32.891 "claimed": true, 00:12:32.891 "claim_type": "exclusive_write", 00:12:32.891 "zoned": false, 00:12:32.891 "supported_io_types": { 00:12:32.891 "read": true, 00:12:32.892 "write": true, 00:12:32.892 "unmap": true, 00:12:32.892 "flush": true, 00:12:32.892 "reset": true, 00:12:32.892 "nvme_admin": false, 00:12:32.892 "nvme_io": false, 00:12:32.892 "nvme_io_md": false, 00:12:32.892 "write_zeroes": true, 00:12:32.892 "zcopy": true, 00:12:32.892 "get_zone_info": false, 00:12:32.892 "zone_management": false, 00:12:32.892 "zone_append": false, 00:12:32.892 "compare": false, 00:12:32.892 "compare_and_write": false, 00:12:32.892 "abort": true, 00:12:32.892 "seek_hole": false, 00:12:32.892 "seek_data": false, 00:12:32.892 "copy": true, 00:12:32.892 "nvme_iov_md": false 00:12:32.892 }, 00:12:32.892 "memory_domains": [ 00:12:32.892 { 00:12:32.892 "dma_device_id": "system", 00:12:32.892 "dma_device_type": 1 00:12:32.892 }, 00:12:32.892 { 00:12:32.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.892 "dma_device_type": 2 00:12:32.892 } 00:12:32.892 ], 00:12:32.892 "driver_specific": {} 00:12:32.892 }' 00:12:32.892 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:33.150 10:19:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:33.408 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:33.408 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:33.408 [2024-07-15 10:19:58.092187] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.408 [2024-07-15 10:19:58.092211] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.408 [2024-07-15 10:19:58.092241] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.408 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.666 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.666 "name": "Existed_Raid", 00:12:33.666 "uuid": "c6f2b590-bcaf-4b52-a920-0eacf35e2478", 00:12:33.666 "strip_size_kb": 64, 00:12:33.666 "state": "offline", 00:12:33.666 "raid_level": "concat", 00:12:33.666 "superblock": false, 00:12:33.666 "num_base_bdevs": 3, 00:12:33.666 "num_base_bdevs_discovered": 2, 00:12:33.666 "num_base_bdevs_operational": 2, 00:12:33.666 "base_bdevs_list": [ 00:12:33.666 { 00:12:33.666 "name": null, 00:12:33.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.666 "is_configured": false, 00:12:33.666 "data_offset": 0, 00:12:33.666 "data_size": 65536 00:12:33.666 }, 00:12:33.666 { 00:12:33.666 "name": "BaseBdev2", 00:12:33.666 "uuid": "db171e45-135e-456a-a877-06affc89fd96", 00:12:33.666 
"is_configured": true, 00:12:33.666 "data_offset": 0, 00:12:33.666 "data_size": 65536 00:12:33.666 }, 00:12:33.666 { 00:12:33.666 "name": "BaseBdev3", 00:12:33.666 "uuid": "0cc6c0dd-d738-48ea-9f6e-54200eccbddd", 00:12:33.666 "is_configured": true, 00:12:33.666 "data_offset": 0, 00:12:33.666 "data_size": 65536 00:12:33.666 } 00:12:33.666 ] 00:12:33.666 }' 00:12:33.666 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.666 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.233 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:34.233 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:34.233 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.233 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:34.233 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:34.233 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.233 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:34.492 [2024-07-15 10:19:59.047536] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.492 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:34.492 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:34.492 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.492 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:34.492 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:34.492 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.492 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:34.750 [2024-07-15 10:19:59.369980] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:34.750 [2024-07-15 10:19:59.370009] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22c1700 name Existed_Raid, state offline 00:12:34.750 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:34.750 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:34.750 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.750 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:35.009 BaseBdev2 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:35.009 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.267 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.525 [ 00:12:35.525 { 00:12:35.525 "name": "BaseBdev2", 00:12:35.525 "aliases": [ 00:12:35.525 "7d42ac6c-19f9-4a4b-96ad-e434e41feefc" 00:12:35.525 ], 00:12:35.525 "product_name": "Malloc disk", 00:12:35.525 "block_size": 512, 00:12:35.525 "num_blocks": 65536, 00:12:35.525 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:35.525 "assigned_rate_limits": { 00:12:35.525 "rw_ios_per_sec": 0, 00:12:35.525 "rw_mbytes_per_sec": 0, 00:12:35.525 "r_mbytes_per_sec": 0, 00:12:35.525 "w_mbytes_per_sec": 0 00:12:35.525 }, 00:12:35.525 "claimed": false, 00:12:35.525 "zoned": false, 00:12:35.525 "supported_io_types": { 00:12:35.525 "read": true, 00:12:35.525 "write": true, 00:12:35.525 "unmap": true, 00:12:35.525 "flush": true, 00:12:35.525 "reset": true, 00:12:35.525 "nvme_admin": false, 00:12:35.525 "nvme_io": false, 00:12:35.525 "nvme_io_md": false, 00:12:35.525 "write_zeroes": true, 00:12:35.525 "zcopy": true, 00:12:35.525 "get_zone_info": false, 00:12:35.525 "zone_management": false, 00:12:35.525 "zone_append": false, 00:12:35.525 "compare": false, 00:12:35.525 "compare_and_write": false, 00:12:35.525 "abort": true, 00:12:35.525 "seek_hole": false, 00:12:35.525 "seek_data": false, 00:12:35.525 "copy": true, 00:12:35.525 "nvme_iov_md": false 00:12:35.525 }, 00:12:35.525 "memory_domains": [ 00:12:35.525 { 00:12:35.525 "dma_device_id": "system", 00:12:35.525 "dma_device_type": 1 00:12:35.525 }, 00:12:35.525 { 00:12:35.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.525 "dma_device_type": 2 00:12:35.525 } 00:12:35.525 ], 00:12:35.525 "driver_specific": {} 00:12:35.525 } 00:12:35.525 ] 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:35.525 BaseBdev3 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:35.525 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.784 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:35.784 [ 00:12:35.784 { 00:12:35.784 "name": "BaseBdev3", 00:12:35.784 "aliases": [ 00:12:35.784 "769993ad-e16b-4cec-b3e7-59920526e4b7" 00:12:35.784 ], 00:12:35.784 "product_name": "Malloc disk", 00:12:35.784 "block_size": 512, 00:12:35.784 "num_blocks": 65536, 00:12:35.784 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:35.784 "assigned_rate_limits": { 00:12:35.784 "rw_ios_per_sec": 0, 00:12:35.784 "rw_mbytes_per_sec": 0, 00:12:35.784 "r_mbytes_per_sec": 0, 00:12:35.784 "w_mbytes_per_sec": 0 00:12:35.784 }, 00:12:35.784 "claimed": false, 00:12:35.784 "zoned": false, 00:12:35.784 "supported_io_types": { 00:12:35.784 "read": true, 00:12:35.784 "write": true, 00:12:35.784 "unmap": true, 00:12:35.784 "flush": true, 00:12:35.784 "reset": true, 00:12:35.784 "nvme_admin": false, 00:12:35.784 "nvme_io": false, 00:12:35.784 "nvme_io_md": false, 00:12:35.784 "write_zeroes": true, 00:12:35.784 "zcopy": true, 00:12:35.784 "get_zone_info": false, 00:12:35.784 "zone_management": false, 00:12:35.784 "zone_append": false, 00:12:35.784 "compare": false, 00:12:35.784 "compare_and_write": false, 00:12:35.784 "abort": true, 00:12:35.784 "seek_hole": false, 00:12:35.784 "seek_data": false, 00:12:35.784 "copy": true, 00:12:35.784 "nvme_iov_md": false 00:12:35.784 }, 00:12:35.784 "memory_domains": [ 00:12:35.784 { 00:12:35.784 "dma_device_id": "system", 00:12:35.784 "dma_device_type": 1 00:12:35.784 }, 00:12:35.784 { 00:12:35.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.784 "dma_device_type": 2 00:12:35.784 } 00:12:35.784 ], 00:12:35.784 "driver_specific": {} 00:12:35.784 } 00:12:35.784 ] 00:12:35.784 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:35.784 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:35.784 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:35.784 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:12:36.043 [2024-07-15 10:20:00.722814] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:36.043 [2024-07-15 10:20:00.722848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:36.043 [2024-07-15 10:20:00.722861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.043 [2024-07-15 10:20:00.723814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.043 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.301 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:36.301 "name": "Existed_Raid", 00:12:36.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.301 "strip_size_kb": 64, 00:12:36.301 "state": "configuring", 00:12:36.301 "raid_level": "concat", 00:12:36.301 "superblock": false, 00:12:36.301 "num_base_bdevs": 3, 00:12:36.301 "num_base_bdevs_discovered": 2, 00:12:36.301 "num_base_bdevs_operational": 3, 00:12:36.301 "base_bdevs_list": [ 00:12:36.301 { 00:12:36.301 "name": "BaseBdev1", 00:12:36.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.301 "is_configured": false, 00:12:36.301 "data_offset": 0, 00:12:36.301 "data_size": 0 00:12:36.301 }, 00:12:36.301 { 00:12:36.301 "name": "BaseBdev2", 00:12:36.301 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:36.301 "is_configured": true, 00:12:36.301 "data_offset": 0, 00:12:36.301 "data_size": 65536 00:12:36.301 }, 00:12:36.301 { 00:12:36.301 "name": "BaseBdev3", 00:12:36.301 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:36.301 "is_configured": true, 00:12:36.301 "data_offset": 0, 00:12:36.301 "data_size": 65536 00:12:36.301 } 00:12:36.301 ] 00:12:36.301 }' 00:12:36.301 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:36.301 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.867 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:36.867 [2024-07-15 10:20:01.512838] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:36.867 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.868 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.126 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.126 "name": "Existed_Raid", 00:12:37.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.126 "strip_size_kb": 64, 00:12:37.126 "state": "configuring", 00:12:37.126 "raid_level": "concat", 00:12:37.126 "superblock": false, 00:12:37.126 "num_base_bdevs": 3, 00:12:37.126 "num_base_bdevs_discovered": 1, 00:12:37.126 "num_base_bdevs_operational": 3, 00:12:37.126 "base_bdevs_list": [ 00:12:37.126 { 00:12:37.126 "name": "BaseBdev1", 00:12:37.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.126 "is_configured": false, 00:12:37.126 "data_offset": 0, 00:12:37.126 "data_size": 0 00:12:37.126 }, 00:12:37.126 { 00:12:37.126 "name": null, 00:12:37.126 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:37.126 "is_configured": false, 00:12:37.126 "data_offset": 0, 00:12:37.126 "data_size": 65536 00:12:37.126 }, 00:12:37.126 { 00:12:37.126 "name": "BaseBdev3", 00:12:37.126 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:37.126 "is_configured": true, 00:12:37.126 "data_offset": 0, 00:12:37.126 "data_size": 65536 00:12:37.126 } 00:12:37.126 ] 00:12:37.126 }' 00:12:37.126 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.126 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.694 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.694 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:37.694 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ 
false == \f\a\l\s\e ]] 00:12:37.694 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:37.953 [2024-07-15 10:20:02.534253] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.953 BaseBdev1 00:12:37.953 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:37.953 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:37.953 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:37.953 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:37.953 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:37.953 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:37.953 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:37.953 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:38.212 [ 00:12:38.212 { 00:12:38.212 "name": "BaseBdev1", 00:12:38.212 "aliases": [ 00:12:38.212 "ad434ac4-ba4e-4278-8751-9a8a837be3ec" 00:12:38.212 ], 00:12:38.212 "product_name": "Malloc disk", 00:12:38.212 "block_size": 512, 00:12:38.212 "num_blocks": 65536, 00:12:38.212 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:38.212 "assigned_rate_limits": { 00:12:38.212 "rw_ios_per_sec": 0, 00:12:38.212 "rw_mbytes_per_sec": 0, 00:12:38.212 "r_mbytes_per_sec": 0, 00:12:38.212 "w_mbytes_per_sec": 0 00:12:38.212 }, 00:12:38.212 "claimed": true, 00:12:38.212 "claim_type": "exclusive_write", 00:12:38.212 "zoned": false, 00:12:38.212 "supported_io_types": { 00:12:38.212 "read": true, 00:12:38.212 "write": true, 00:12:38.212 "unmap": true, 00:12:38.212 "flush": true, 00:12:38.212 "reset": true, 00:12:38.212 "nvme_admin": false, 00:12:38.212 "nvme_io": false, 00:12:38.212 "nvme_io_md": false, 00:12:38.212 "write_zeroes": true, 00:12:38.212 "zcopy": true, 00:12:38.212 "get_zone_info": false, 00:12:38.212 "zone_management": false, 00:12:38.212 "zone_append": false, 00:12:38.212 "compare": false, 00:12:38.212 "compare_and_write": false, 00:12:38.212 "abort": true, 00:12:38.212 "seek_hole": false, 00:12:38.212 "seek_data": false, 00:12:38.212 "copy": true, 00:12:38.212 "nvme_iov_md": false 00:12:38.212 }, 00:12:38.212 "memory_domains": [ 00:12:38.212 { 00:12:38.212 "dma_device_id": "system", 00:12:38.212 "dma_device_type": 1 00:12:38.212 }, 00:12:38.212 { 00:12:38.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.212 "dma_device_type": 2 00:12:38.212 } 00:12:38.212 ], 00:12:38.212 "driver_specific": {} 00:12:38.212 } 00:12:38.212 ] 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.212 10:20:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.212 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.470 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:38.470 "name": "Existed_Raid", 00:12:38.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.470 "strip_size_kb": 64, 00:12:38.470 "state": "configuring", 00:12:38.470 "raid_level": "concat", 00:12:38.470 "superblock": false, 00:12:38.470 "num_base_bdevs": 3, 00:12:38.470 "num_base_bdevs_discovered": 2, 00:12:38.470 "num_base_bdevs_operational": 3, 00:12:38.470 "base_bdevs_list": [ 00:12:38.470 { 00:12:38.470 "name": "BaseBdev1", 00:12:38.470 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:38.470 "is_configured": true, 00:12:38.470 "data_offset": 0, 00:12:38.470 "data_size": 65536 00:12:38.470 }, 00:12:38.470 { 00:12:38.470 "name": null, 00:12:38.470 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:38.470 "is_configured": false, 00:12:38.470 "data_offset": 0, 00:12:38.470 "data_size": 65536 00:12:38.470 }, 00:12:38.470 { 00:12:38.470 "name": "BaseBdev3", 00:12:38.470 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:38.470 "is_configured": true, 00:12:38.470 "data_offset": 0, 00:12:38.470 "data_size": 65536 00:12:38.470 } 00:12:38.470 ] 00:12:38.470 }' 00:12:38.470 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.470 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.035 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:39.035 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.035 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:39.035 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:39.293 [2024-07-15 10:20:03.853697] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:39.293 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:39.294 10:20:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.294 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.294 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:39.294 "name": "Existed_Raid", 00:12:39.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.294 "strip_size_kb": 64, 00:12:39.294 "state": "configuring", 00:12:39.294 "raid_level": "concat", 00:12:39.294 "superblock": false, 00:12:39.294 "num_base_bdevs": 3, 00:12:39.294 "num_base_bdevs_discovered": 1, 00:12:39.294 "num_base_bdevs_operational": 3, 00:12:39.294 "base_bdevs_list": [ 00:12:39.294 { 00:12:39.294 "name": "BaseBdev1", 00:12:39.294 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:39.294 "is_configured": true, 00:12:39.294 "data_offset": 0, 00:12:39.294 "data_size": 65536 00:12:39.294 }, 00:12:39.294 { 00:12:39.294 "name": null, 00:12:39.294 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:39.294 "is_configured": false, 00:12:39.294 "data_offset": 0, 00:12:39.294 "data_size": 65536 00:12:39.294 }, 00:12:39.294 { 00:12:39.294 "name": null, 00:12:39.294 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:39.294 "is_configured": false, 00:12:39.294 "data_offset": 0, 00:12:39.294 "data_size": 65536 00:12:39.294 } 00:12:39.294 ] 00:12:39.294 }' 00:12:39.294 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:39.294 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.861 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.861 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:40.120 [2024-07-15 10:20:04.860316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.120 10:20:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.120 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.379 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:40.379 "name": "Existed_Raid", 00:12:40.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.379 "strip_size_kb": 64, 00:12:40.379 "state": "configuring", 00:12:40.379 "raid_level": "concat", 00:12:40.379 "superblock": false, 00:12:40.379 "num_base_bdevs": 3, 00:12:40.379 "num_base_bdevs_discovered": 2, 00:12:40.379 "num_base_bdevs_operational": 3, 00:12:40.379 "base_bdevs_list": [ 00:12:40.379 { 00:12:40.379 "name": "BaseBdev1", 00:12:40.379 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:40.379 "is_configured": true, 00:12:40.379 "data_offset": 0, 00:12:40.379 "data_size": 65536 00:12:40.379 }, 00:12:40.379 { 00:12:40.379 "name": null, 00:12:40.379 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:40.379 "is_configured": false, 00:12:40.379 "data_offset": 0, 00:12:40.379 "data_size": 65536 00:12:40.379 }, 00:12:40.379 { 00:12:40.379 "name": "BaseBdev3", 00:12:40.379 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:40.379 "is_configured": true, 00:12:40.379 "data_offset": 0, 00:12:40.379 "data_size": 65536 00:12:40.379 } 00:12:40.379 ] 00:12:40.379 }' 00:12:40.379 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:40.379 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.946 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.946 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:40.946 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:40.946 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:41.204 [2024-07-15 
10:20:05.826800] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.204 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.205 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.463 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.464 "name": "Existed_Raid", 00:12:41.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.464 "strip_size_kb": 64, 00:12:41.464 "state": "configuring", 00:12:41.464 "raid_level": "concat", 00:12:41.464 "superblock": false, 00:12:41.464 "num_base_bdevs": 3, 00:12:41.464 "num_base_bdevs_discovered": 1, 00:12:41.464 "num_base_bdevs_operational": 3, 00:12:41.464 "base_bdevs_list": [ 00:12:41.464 { 00:12:41.464 "name": null, 00:12:41.464 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:41.464 "is_configured": false, 00:12:41.464 "data_offset": 0, 00:12:41.464 "data_size": 65536 00:12:41.464 }, 00:12:41.464 { 00:12:41.464 "name": null, 00:12:41.464 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:41.464 "is_configured": false, 00:12:41.464 "data_offset": 0, 00:12:41.464 "data_size": 65536 00:12:41.464 }, 00:12:41.464 { 00:12:41.464 "name": "BaseBdev3", 00:12:41.464 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:41.464 "is_configured": true, 00:12:41.464 "data_offset": 0, 00:12:41.464 "data_size": 65536 00:12:41.464 } 00:12:41.464 ] 00:12:41.464 }' 00:12:41.464 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.464 10:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.722 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:41.722 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.980 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:41.980 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:42.239 [2024-07-15 10:20:06.834848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.239 10:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.239 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.239 "name": "Existed_Raid", 00:12:42.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.239 "strip_size_kb": 64, 00:12:42.239 "state": "configuring", 00:12:42.239 "raid_level": "concat", 00:12:42.239 "superblock": false, 00:12:42.239 "num_base_bdevs": 3, 00:12:42.239 "num_base_bdevs_discovered": 2, 00:12:42.239 "num_base_bdevs_operational": 3, 00:12:42.239 "base_bdevs_list": [ 00:12:42.239 { 00:12:42.239 "name": null, 00:12:42.239 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:42.239 "is_configured": false, 00:12:42.239 "data_offset": 0, 00:12:42.239 "data_size": 65536 00:12:42.239 }, 00:12:42.239 { 00:12:42.239 "name": "BaseBdev2", 00:12:42.239 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:42.239 "is_configured": true, 00:12:42.239 "data_offset": 0, 00:12:42.239 "data_size": 65536 00:12:42.239 }, 00:12:42.239 { 00:12:42.239 "name": "BaseBdev3", 00:12:42.239 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:42.239 "is_configured": true, 00:12:42.239 "data_offset": 0, 00:12:42.239 "data_size": 65536 00:12:42.239 } 00:12:42.239 ] 00:12:42.239 }' 00:12:42.239 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.239 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.806 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:42.807 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.065 10:20:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:43.065 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.065 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:43.065 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ad434ac4-ba4e-4278-8751-9a8a837be3ec 00:12:43.323 [2024-07-15 10:20:07.984660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:43.323 [2024-07-15 10:20:07.984691] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22c1af0 00:12:43.323 [2024-07-15 10:20:07.984696] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:43.323 [2024-07-15 10:20:07.984823] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22c0f10 00:12:43.323 [2024-07-15 10:20:07.984897] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22c1af0 00:12:43.323 [2024-07-15 10:20:07.984929] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x22c1af0 00:12:43.323 [2024-07-15 10:20:07.985051] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.323 NewBaseBdev 00:12:43.323 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:43.323 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:43.323 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:43.323 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:43.323 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:43.323 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:43.323 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:43.582 [ 00:12:43.582 { 00:12:43.582 "name": "NewBaseBdev", 00:12:43.582 "aliases": [ 00:12:43.582 "ad434ac4-ba4e-4278-8751-9a8a837be3ec" 00:12:43.582 ], 00:12:43.582 "product_name": "Malloc disk", 00:12:43.582 "block_size": 512, 00:12:43.582 "num_blocks": 65536, 00:12:43.582 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:43.582 "assigned_rate_limits": { 00:12:43.582 "rw_ios_per_sec": 0, 00:12:43.582 "rw_mbytes_per_sec": 0, 00:12:43.582 "r_mbytes_per_sec": 0, 00:12:43.582 "w_mbytes_per_sec": 0 00:12:43.582 }, 00:12:43.582 "claimed": true, 00:12:43.582 "claim_type": "exclusive_write", 00:12:43.582 "zoned": false, 00:12:43.582 "supported_io_types": { 00:12:43.582 "read": true, 00:12:43.582 "write": true, 00:12:43.582 "unmap": true, 00:12:43.582 "flush": true, 00:12:43.582 "reset": true, 00:12:43.582 "nvme_admin": false, 00:12:43.582 "nvme_io": false, 00:12:43.582 "nvme_io_md": false, 
00:12:43.582 "write_zeroes": true, 00:12:43.582 "zcopy": true, 00:12:43.582 "get_zone_info": false, 00:12:43.582 "zone_management": false, 00:12:43.582 "zone_append": false, 00:12:43.582 "compare": false, 00:12:43.582 "compare_and_write": false, 00:12:43.582 "abort": true, 00:12:43.582 "seek_hole": false, 00:12:43.582 "seek_data": false, 00:12:43.582 "copy": true, 00:12:43.582 "nvme_iov_md": false 00:12:43.582 }, 00:12:43.582 "memory_domains": [ 00:12:43.582 { 00:12:43.582 "dma_device_id": "system", 00:12:43.582 "dma_device_type": 1 00:12:43.582 }, 00:12:43.582 { 00:12:43.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.582 "dma_device_type": 2 00:12:43.582 } 00:12:43.582 ], 00:12:43.582 "driver_specific": {} 00:12:43.582 } 00:12:43.582 ] 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.582 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.865 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:43.865 "name": "Existed_Raid", 00:12:43.865 "uuid": "eae5c89c-dae6-44fe-936e-28e9e0b416bb", 00:12:43.865 "strip_size_kb": 64, 00:12:43.865 "state": "online", 00:12:43.865 "raid_level": "concat", 00:12:43.865 "superblock": false, 00:12:43.865 "num_base_bdevs": 3, 00:12:43.865 "num_base_bdevs_discovered": 3, 00:12:43.865 "num_base_bdevs_operational": 3, 00:12:43.865 "base_bdevs_list": [ 00:12:43.865 { 00:12:43.865 "name": "NewBaseBdev", 00:12:43.865 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:43.865 "is_configured": true, 00:12:43.865 "data_offset": 0, 00:12:43.865 "data_size": 65536 00:12:43.865 }, 00:12:43.865 { 00:12:43.865 "name": "BaseBdev2", 00:12:43.865 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:43.865 "is_configured": true, 00:12:43.865 "data_offset": 0, 00:12:43.865 "data_size": 65536 00:12:43.865 }, 00:12:43.865 { 00:12:43.865 "name": "BaseBdev3", 00:12:43.865 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:43.865 "is_configured": true, 00:12:43.865 "data_offset": 0, 00:12:43.865 "data_size": 65536 00:12:43.865 } 00:12:43.865 ] 00:12:43.865 }' 
00:12:43.866 10:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:43.866 10:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:44.474 [2024-07-15 10:20:09.155909] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.474 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:44.474 "name": "Existed_Raid", 00:12:44.474 "aliases": [ 00:12:44.474 "eae5c89c-dae6-44fe-936e-28e9e0b416bb" 00:12:44.474 ], 00:12:44.474 "product_name": "Raid Volume", 00:12:44.474 "block_size": 512, 00:12:44.474 "num_blocks": 196608, 00:12:44.474 "uuid": "eae5c89c-dae6-44fe-936e-28e9e0b416bb", 00:12:44.474 "assigned_rate_limits": { 00:12:44.474 "rw_ios_per_sec": 0, 00:12:44.474 "rw_mbytes_per_sec": 0, 00:12:44.474 "r_mbytes_per_sec": 0, 00:12:44.474 "w_mbytes_per_sec": 0 00:12:44.474 }, 00:12:44.474 "claimed": false, 00:12:44.474 "zoned": false, 00:12:44.474 "supported_io_types": { 00:12:44.474 "read": true, 00:12:44.474 "write": true, 00:12:44.474 "unmap": true, 00:12:44.474 "flush": true, 00:12:44.474 "reset": true, 00:12:44.474 "nvme_admin": false, 00:12:44.474 "nvme_io": false, 00:12:44.474 "nvme_io_md": false, 00:12:44.474 "write_zeroes": true, 00:12:44.474 "zcopy": false, 00:12:44.474 "get_zone_info": false, 00:12:44.474 "zone_management": false, 00:12:44.474 "zone_append": false, 00:12:44.474 "compare": false, 00:12:44.474 "compare_and_write": false, 00:12:44.474 "abort": false, 00:12:44.474 "seek_hole": false, 00:12:44.474 "seek_data": false, 00:12:44.474 "copy": false, 00:12:44.474 "nvme_iov_md": false 00:12:44.474 }, 00:12:44.474 "memory_domains": [ 00:12:44.474 { 00:12:44.474 "dma_device_id": "system", 00:12:44.474 "dma_device_type": 1 00:12:44.474 }, 00:12:44.474 { 00:12:44.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.474 "dma_device_type": 2 00:12:44.474 }, 00:12:44.474 { 00:12:44.474 "dma_device_id": "system", 00:12:44.474 "dma_device_type": 1 00:12:44.474 }, 00:12:44.474 { 00:12:44.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.474 "dma_device_type": 2 00:12:44.474 }, 00:12:44.474 { 00:12:44.474 "dma_device_id": "system", 00:12:44.474 "dma_device_type": 1 00:12:44.474 }, 00:12:44.474 { 00:12:44.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.474 "dma_device_type": 2 00:12:44.474 } 00:12:44.474 ], 00:12:44.474 "driver_specific": { 00:12:44.474 "raid": { 00:12:44.475 "uuid": "eae5c89c-dae6-44fe-936e-28e9e0b416bb", 00:12:44.475 "strip_size_kb": 64, 00:12:44.475 
"state": "online", 00:12:44.475 "raid_level": "concat", 00:12:44.475 "superblock": false, 00:12:44.475 "num_base_bdevs": 3, 00:12:44.475 "num_base_bdevs_discovered": 3, 00:12:44.475 "num_base_bdevs_operational": 3, 00:12:44.475 "base_bdevs_list": [ 00:12:44.475 { 00:12:44.475 "name": "NewBaseBdev", 00:12:44.475 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:44.475 "is_configured": true, 00:12:44.475 "data_offset": 0, 00:12:44.475 "data_size": 65536 00:12:44.475 }, 00:12:44.475 { 00:12:44.475 "name": "BaseBdev2", 00:12:44.475 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:44.475 "is_configured": true, 00:12:44.475 "data_offset": 0, 00:12:44.475 "data_size": 65536 00:12:44.475 }, 00:12:44.475 { 00:12:44.475 "name": "BaseBdev3", 00:12:44.475 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:44.475 "is_configured": true, 00:12:44.475 "data_offset": 0, 00:12:44.475 "data_size": 65536 00:12:44.475 } 00:12:44.475 ] 00:12:44.475 } 00:12:44.475 } 00:12:44.475 }' 00:12:44.475 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.475 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:44.475 BaseBdev2 00:12:44.475 BaseBdev3' 00:12:44.475 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:44.475 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:44.475 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:44.733 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:44.733 "name": "NewBaseBdev", 00:12:44.733 "aliases": [ 00:12:44.733 "ad434ac4-ba4e-4278-8751-9a8a837be3ec" 00:12:44.733 ], 00:12:44.733 "product_name": "Malloc disk", 00:12:44.733 "block_size": 512, 00:12:44.733 "num_blocks": 65536, 00:12:44.733 "uuid": "ad434ac4-ba4e-4278-8751-9a8a837be3ec", 00:12:44.733 "assigned_rate_limits": { 00:12:44.733 "rw_ios_per_sec": 0, 00:12:44.733 "rw_mbytes_per_sec": 0, 00:12:44.733 "r_mbytes_per_sec": 0, 00:12:44.733 "w_mbytes_per_sec": 0 00:12:44.733 }, 00:12:44.733 "claimed": true, 00:12:44.733 "claim_type": "exclusive_write", 00:12:44.733 "zoned": false, 00:12:44.733 "supported_io_types": { 00:12:44.733 "read": true, 00:12:44.733 "write": true, 00:12:44.733 "unmap": true, 00:12:44.733 "flush": true, 00:12:44.733 "reset": true, 00:12:44.733 "nvme_admin": false, 00:12:44.733 "nvme_io": false, 00:12:44.733 "nvme_io_md": false, 00:12:44.733 "write_zeroes": true, 00:12:44.733 "zcopy": true, 00:12:44.733 "get_zone_info": false, 00:12:44.733 "zone_management": false, 00:12:44.733 "zone_append": false, 00:12:44.733 "compare": false, 00:12:44.733 "compare_and_write": false, 00:12:44.733 "abort": true, 00:12:44.733 "seek_hole": false, 00:12:44.733 "seek_data": false, 00:12:44.733 "copy": true, 00:12:44.733 "nvme_iov_md": false 00:12:44.733 }, 00:12:44.733 "memory_domains": [ 00:12:44.733 { 00:12:44.733 "dma_device_id": "system", 00:12:44.733 "dma_device_type": 1 00:12:44.733 }, 00:12:44.733 { 00:12:44.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.733 "dma_device_type": 2 00:12:44.733 } 00:12:44.733 ], 00:12:44.733 "driver_specific": {} 00:12:44.733 }' 00:12:44.733 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:12:44.733 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:44.733 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:44.733 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.733 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:44.990 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:45.247 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:45.247 "name": "BaseBdev2", 00:12:45.247 "aliases": [ 00:12:45.247 "7d42ac6c-19f9-4a4b-96ad-e434e41feefc" 00:12:45.247 ], 00:12:45.247 "product_name": "Malloc disk", 00:12:45.247 "block_size": 512, 00:12:45.247 "num_blocks": 65536, 00:12:45.247 "uuid": "7d42ac6c-19f9-4a4b-96ad-e434e41feefc", 00:12:45.247 "assigned_rate_limits": { 00:12:45.247 "rw_ios_per_sec": 0, 00:12:45.247 "rw_mbytes_per_sec": 0, 00:12:45.247 "r_mbytes_per_sec": 0, 00:12:45.247 "w_mbytes_per_sec": 0 00:12:45.247 }, 00:12:45.247 "claimed": true, 00:12:45.247 "claim_type": "exclusive_write", 00:12:45.247 "zoned": false, 00:12:45.247 "supported_io_types": { 00:12:45.247 "read": true, 00:12:45.247 "write": true, 00:12:45.247 "unmap": true, 00:12:45.247 "flush": true, 00:12:45.247 "reset": true, 00:12:45.247 "nvme_admin": false, 00:12:45.247 "nvme_io": false, 00:12:45.247 "nvme_io_md": false, 00:12:45.247 "write_zeroes": true, 00:12:45.247 "zcopy": true, 00:12:45.247 "get_zone_info": false, 00:12:45.247 "zone_management": false, 00:12:45.247 "zone_append": false, 00:12:45.247 "compare": false, 00:12:45.247 "compare_and_write": false, 00:12:45.247 "abort": true, 00:12:45.247 "seek_hole": false, 00:12:45.247 "seek_data": false, 00:12:45.247 "copy": true, 00:12:45.247 "nvme_iov_md": false 00:12:45.247 }, 00:12:45.247 "memory_domains": [ 00:12:45.247 { 00:12:45.247 "dma_device_id": "system", 00:12:45.247 "dma_device_type": 1 00:12:45.247 }, 00:12:45.247 { 00:12:45.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.247 "dma_device_type": 2 00:12:45.247 } 00:12:45.247 ], 00:12:45.247 "driver_specific": {} 00:12:45.247 }' 00:12:45.247 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:45.247 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:45.247 10:20:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:45.247 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.247 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.247 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:45.247 10:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.247 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.504 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:45.504 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:45.504 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:45.504 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:45.505 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:45.505 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:45.505 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:45.761 "name": "BaseBdev3", 00:12:45.761 "aliases": [ 00:12:45.761 "769993ad-e16b-4cec-b3e7-59920526e4b7" 00:12:45.761 ], 00:12:45.761 "product_name": "Malloc disk", 00:12:45.761 "block_size": 512, 00:12:45.761 "num_blocks": 65536, 00:12:45.761 "uuid": "769993ad-e16b-4cec-b3e7-59920526e4b7", 00:12:45.761 "assigned_rate_limits": { 00:12:45.761 "rw_ios_per_sec": 0, 00:12:45.761 "rw_mbytes_per_sec": 0, 00:12:45.761 "r_mbytes_per_sec": 0, 00:12:45.761 "w_mbytes_per_sec": 0 00:12:45.761 }, 00:12:45.761 "claimed": true, 00:12:45.761 "claim_type": "exclusive_write", 00:12:45.761 "zoned": false, 00:12:45.761 "supported_io_types": { 00:12:45.761 "read": true, 00:12:45.761 "write": true, 00:12:45.761 "unmap": true, 00:12:45.761 "flush": true, 00:12:45.761 "reset": true, 00:12:45.761 "nvme_admin": false, 00:12:45.761 "nvme_io": false, 00:12:45.761 "nvme_io_md": false, 00:12:45.761 "write_zeroes": true, 00:12:45.761 "zcopy": true, 00:12:45.761 "get_zone_info": false, 00:12:45.761 "zone_management": false, 00:12:45.761 "zone_append": false, 00:12:45.761 "compare": false, 00:12:45.761 "compare_and_write": false, 00:12:45.761 "abort": true, 00:12:45.761 "seek_hole": false, 00:12:45.761 "seek_data": false, 00:12:45.761 "copy": true, 00:12:45.761 "nvme_iov_md": false 00:12:45.761 }, 00:12:45.761 "memory_domains": [ 00:12:45.761 { 00:12:45.761 "dma_device_id": "system", 00:12:45.761 "dma_device_type": 1 00:12:45.761 }, 00:12:45.761 { 00:12:45.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.761 "dma_device_type": 2 00:12:45.761 } 00:12:45.761 ], 00:12:45.761 "driver_specific": {} 00:12:45.761 }' 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.761 10:20:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:45.761 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:46.019 [2024-07-15 10:20:10.755850] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:46.019 [2024-07-15 10:20:10.755872] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.019 [2024-07-15 10:20:10.755918] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.019 [2024-07-15 10:20:10.755952] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.019 [2024-07-15 10:20:10.755960] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22c1af0 name Existed_Raid, state offline 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1776825 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1776825 ']' 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1776825 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:46.019 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776825 00:12:46.277 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:46.277 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:46.277 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776825' 00:12:46.277 killing process with pid 1776825 00:12:46.277 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1776825 00:12:46.277 [2024-07-15 10:20:10.821542] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.277 10:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1776825 00:12:46.277 [2024-07-15 10:20:10.843804] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.277 10:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:46.277 00:12:46.277 real 0m21.274s 00:12:46.277 user 0m38.843s 00:12:46.277 sys 0m4.059s 00:12:46.277 10:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.277 10:20:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.277 ************************************ 00:12:46.277 END TEST raid_state_function_test 00:12:46.277 ************************************ 00:12:46.277 10:20:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:46.277 10:20:11 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:46.277 10:20:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:46.277 10:20:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.277 10:20:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.534 ************************************ 00:12:46.534 START TEST raid_state_function_test_sb 00:12:46.534 ************************************ 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:46.534 
10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1781009 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1781009' 00:12:46.534 Process raid pid: 1781009 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1781009 /var/tmp/spdk-raid.sock 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1781009 ']' 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:46.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.534 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:46.535 [2024-07-15 10:20:11.150770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
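The raid_state_function_test_sb run that starts here drives the same RPC surface, but with the superblock flag (-s) passed to bdev_raid_create. A hedged approximation of the setup visible in this trace, using only the binaries and RPC calls that appear in it (the polling loop below is an illustrative stand-in for the harness's waitforlisten helper, not the harness itself):

  # Launch the bare bdev service that the test RPCs talk to.
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
  SOCK=/var/tmp/spdk-raid.sock
  "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
  # Rough stand-in for waitforlisten: poll until the RPC socket answers.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_bdevs >/dev/null 2>&1; do sleep 0.5; done
  # Create the 3-way concat raid with a 64k strip size and an on-disk superblock (-s); as in
  # the trace, the base bdevs need not exist yet, so the raid stays in the "configuring" state.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid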
00:12:46.535 [2024-07-15 10:20:11.150812] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:46.535 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:46.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:46.535 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:46.535 [2024-07-15 10:20:11.242175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.535 [2024-07-15 10:20:11.315936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.791 [2024-07-15 10:20:11.375829] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.791 [2024-07-15 10:20:11.375859] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.353 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.353 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:12:47.353 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:47.353 [2024-07-15 10:20:12.091603] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.353 [2024-07-15 10:20:12.091634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.353 [2024-07-15 10:20:12.091642] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.353 [2024-07-15 10:20:12.091649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.353 [2024-07-15 10:20:12.091655] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.353 [2024-07-15 10:20:12.091662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.353 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.611 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.611 "name": "Existed_Raid", 00:12:47.611 "uuid": "43effa32-6577-4435-b01d-26a1d998eea1", 00:12:47.611 "strip_size_kb": 64, 00:12:47.611 "state": "configuring", 00:12:47.611 "raid_level": "concat", 00:12:47.611 "superblock": true, 00:12:47.611 "num_base_bdevs": 3, 00:12:47.611 "num_base_bdevs_discovered": 0, 00:12:47.611 "num_base_bdevs_operational": 3, 00:12:47.611 "base_bdevs_list": [ 00:12:47.611 { 00:12:47.611 "name": "BaseBdev1", 00:12:47.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.611 "is_configured": false, 00:12:47.611 "data_offset": 0, 00:12:47.611 "data_size": 0 00:12:47.611 }, 00:12:47.611 { 00:12:47.611 "name": "BaseBdev2", 00:12:47.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.611 "is_configured": false, 00:12:47.611 "data_offset": 0, 00:12:47.611 "data_size": 0 00:12:47.611 }, 00:12:47.611 { 00:12:47.611 "name": "BaseBdev3", 00:12:47.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.611 "is_configured": false, 00:12:47.611 "data_offset": 0, 00:12:47.611 "data_size": 0 00:12:47.611 } 00:12:47.611 ] 00:12:47.611 }' 00:12:47.611 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.611 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.174 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:48.174 [2024-07-15 10:20:12.917640] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.174 [2024-07-15 10:20:12.917662] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2328f40 name Existed_Raid, state configuring 00:12:48.174 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:48.448 [2024-07-15 10:20:13.086094] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.448 [2024-07-15 10:20:13.086114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.448 [2024-07-15 10:20:13.086120] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.448 [2024-07-15 10:20:13.086127] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.448 [2024-07-15 10:20:13.086133] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.448 [2024-07-15 10:20:13.086140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.448 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.704 [2024-07-15 10:20:13.258836] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.704 BaseBdev1 00:12:48.704 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:48.704 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:48.704 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:48.704 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:48.704 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:48.704 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:48.704 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:48.704 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.960 [ 00:12:48.960 { 00:12:48.960 "name": "BaseBdev1", 00:12:48.960 "aliases": [ 00:12:48.960 "01e6f528-1f81-4136-b192-7471879b24f5" 00:12:48.960 ], 00:12:48.960 "product_name": "Malloc disk", 00:12:48.960 "block_size": 512, 00:12:48.960 "num_blocks": 65536, 00:12:48.960 "uuid": "01e6f528-1f81-4136-b192-7471879b24f5", 00:12:48.960 "assigned_rate_limits": { 00:12:48.960 "rw_ios_per_sec": 0, 00:12:48.960 "rw_mbytes_per_sec": 0, 00:12:48.960 "r_mbytes_per_sec": 0, 00:12:48.960 "w_mbytes_per_sec": 0 00:12:48.960 }, 00:12:48.960 "claimed": true, 00:12:48.960 "claim_type": "exclusive_write", 00:12:48.960 "zoned": false, 00:12:48.960 "supported_io_types": { 00:12:48.960 "read": true, 00:12:48.960 "write": true, 00:12:48.960 "unmap": true, 00:12:48.960 "flush": true, 00:12:48.960 "reset": true, 00:12:48.960 "nvme_admin": false, 00:12:48.960 "nvme_io": false, 00:12:48.960 "nvme_io_md": false, 00:12:48.960 "write_zeroes": true, 00:12:48.960 "zcopy": true, 00:12:48.960 "get_zone_info": false, 00:12:48.960 "zone_management": false, 00:12:48.960 "zone_append": false, 00:12:48.960 "compare": false, 00:12:48.960 "compare_and_write": false, 00:12:48.960 "abort": true, 00:12:48.960 "seek_hole": false, 00:12:48.960 "seek_data": false, 00:12:48.960 "copy": true, 00:12:48.960 "nvme_iov_md": false 00:12:48.960 }, 00:12:48.960 "memory_domains": [ 00:12:48.960 { 00:12:48.960 "dma_device_id": "system", 00:12:48.960 "dma_device_type": 1 00:12:48.960 }, 00:12:48.960 { 00:12:48.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.960 "dma_device_type": 2 00:12:48.960 } 00:12:48.960 ], 00:12:48.960 "driver_specific": {} 00:12:48.960 } 00:12:48.960 ] 00:12:48.960 10:20:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.960 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.216 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.216 "name": "Existed_Raid", 00:12:49.216 "uuid": "f19ca323-8cad-49e0-abca-0d619eb5ebe7", 00:12:49.217 "strip_size_kb": 64, 00:12:49.217 "state": "configuring", 00:12:49.217 "raid_level": "concat", 00:12:49.217 "superblock": true, 00:12:49.217 "num_base_bdevs": 3, 00:12:49.217 "num_base_bdevs_discovered": 1, 00:12:49.217 "num_base_bdevs_operational": 3, 00:12:49.217 "base_bdevs_list": [ 00:12:49.217 { 00:12:49.217 "name": "BaseBdev1", 00:12:49.217 "uuid": "01e6f528-1f81-4136-b192-7471879b24f5", 00:12:49.217 "is_configured": true, 00:12:49.217 "data_offset": 2048, 00:12:49.217 "data_size": 63488 00:12:49.217 }, 00:12:49.217 { 00:12:49.217 "name": "BaseBdev2", 00:12:49.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.217 "is_configured": false, 00:12:49.217 "data_offset": 0, 00:12:49.217 "data_size": 0 00:12:49.217 }, 00:12:49.217 { 00:12:49.217 "name": "BaseBdev3", 00:12:49.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.217 "is_configured": false, 00:12:49.217 "data_offset": 0, 00:12:49.217 "data_size": 0 00:12:49.217 } 00:12:49.217 ] 00:12:49.217 }' 00:12:49.217 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.217 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.473 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:49.731 [2024-07-15 10:20:14.389889] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.731 [2024-07-15 10:20:14.389922] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2328810 name Existed_Raid, state configuring 00:12:49.731 10:20:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:49.988 [2024-07-15 10:20:14.562355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.988 [2024-07-15 10:20:14.563395] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.988 [2024-07-15 10:20:14.563420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.988 [2024-07-15 10:20:14.563427] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.988 [2024-07-15 10:20:14.563435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.988 "name": "Existed_Raid", 00:12:49.988 "uuid": "4c8cab95-343f-4fca-be7e-9e0f04b3f64c", 00:12:49.988 "strip_size_kb": 64, 00:12:49.988 "state": "configuring", 00:12:49.988 "raid_level": "concat", 00:12:49.988 "superblock": true, 00:12:49.988 "num_base_bdevs": 3, 00:12:49.988 "num_base_bdevs_discovered": 1, 00:12:49.988 "num_base_bdevs_operational": 3, 00:12:49.988 "base_bdevs_list": [ 00:12:49.988 { 00:12:49.988 "name": "BaseBdev1", 00:12:49.988 "uuid": "01e6f528-1f81-4136-b192-7471879b24f5", 00:12:49.988 "is_configured": true, 00:12:49.988 "data_offset": 2048, 00:12:49.988 "data_size": 63488 00:12:49.988 }, 00:12:49.988 { 00:12:49.988 "name": "BaseBdev2", 00:12:49.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.988 "is_configured": false, 00:12:49.988 "data_offset": 0, 
00:12:49.988 "data_size": 0 00:12:49.988 }, 00:12:49.988 { 00:12:49.988 "name": "BaseBdev3", 00:12:49.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.988 "is_configured": false, 00:12:49.988 "data_offset": 0, 00:12:49.988 "data_size": 0 00:12:49.988 } 00:12:49.988 ] 00:12:49.988 }' 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.988 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.551 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.807 [2024-07-15 10:20:15.383246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.807 BaseBdev2 00:12:50.807 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:50.807 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:50.807 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:50.807 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:50.807 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:50.807 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:50.807 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:50.807 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:51.064 [ 00:12:51.064 { 00:12:51.064 "name": "BaseBdev2", 00:12:51.064 "aliases": [ 00:12:51.064 "e28f0089-febc-4b90-8bcc-ff6c1f1c08a6" 00:12:51.064 ], 00:12:51.064 "product_name": "Malloc disk", 00:12:51.064 "block_size": 512, 00:12:51.064 "num_blocks": 65536, 00:12:51.064 "uuid": "e28f0089-febc-4b90-8bcc-ff6c1f1c08a6", 00:12:51.064 "assigned_rate_limits": { 00:12:51.064 "rw_ios_per_sec": 0, 00:12:51.064 "rw_mbytes_per_sec": 0, 00:12:51.064 "r_mbytes_per_sec": 0, 00:12:51.064 "w_mbytes_per_sec": 0 00:12:51.064 }, 00:12:51.064 "claimed": true, 00:12:51.064 "claim_type": "exclusive_write", 00:12:51.064 "zoned": false, 00:12:51.064 "supported_io_types": { 00:12:51.064 "read": true, 00:12:51.064 "write": true, 00:12:51.064 "unmap": true, 00:12:51.064 "flush": true, 00:12:51.064 "reset": true, 00:12:51.064 "nvme_admin": false, 00:12:51.064 "nvme_io": false, 00:12:51.064 "nvme_io_md": false, 00:12:51.064 "write_zeroes": true, 00:12:51.064 "zcopy": true, 00:12:51.064 "get_zone_info": false, 00:12:51.064 "zone_management": false, 00:12:51.064 "zone_append": false, 00:12:51.064 "compare": false, 00:12:51.064 "compare_and_write": false, 00:12:51.064 "abort": true, 00:12:51.064 "seek_hole": false, 00:12:51.064 "seek_data": false, 00:12:51.064 "copy": true, 00:12:51.064 "nvme_iov_md": false 00:12:51.064 }, 00:12:51.064 "memory_domains": [ 00:12:51.064 { 00:12:51.064 "dma_device_id": "system", 00:12:51.064 "dma_device_type": 1 00:12:51.064 }, 00:12:51.064 { 00:12:51.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.064 "dma_device_type": 
2 00:12:51.064 } 00:12:51.064 ], 00:12:51.064 "driver_specific": {} 00:12:51.064 } 00:12:51.064 ] 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.064 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.321 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.321 "name": "Existed_Raid", 00:12:51.321 "uuid": "4c8cab95-343f-4fca-be7e-9e0f04b3f64c", 00:12:51.321 "strip_size_kb": 64, 00:12:51.321 "state": "configuring", 00:12:51.321 "raid_level": "concat", 00:12:51.321 "superblock": true, 00:12:51.321 "num_base_bdevs": 3, 00:12:51.321 "num_base_bdevs_discovered": 2, 00:12:51.321 "num_base_bdevs_operational": 3, 00:12:51.321 "base_bdevs_list": [ 00:12:51.321 { 00:12:51.321 "name": "BaseBdev1", 00:12:51.321 "uuid": "01e6f528-1f81-4136-b192-7471879b24f5", 00:12:51.321 "is_configured": true, 00:12:51.321 "data_offset": 2048, 00:12:51.321 "data_size": 63488 00:12:51.321 }, 00:12:51.321 { 00:12:51.321 "name": "BaseBdev2", 00:12:51.321 "uuid": "e28f0089-febc-4b90-8bcc-ff6c1f1c08a6", 00:12:51.321 "is_configured": true, 00:12:51.321 "data_offset": 2048, 00:12:51.321 "data_size": 63488 00:12:51.321 }, 00:12:51.321 { 00:12:51.321 "name": "BaseBdev3", 00:12:51.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.321 "is_configured": false, 00:12:51.321 "data_offset": 0, 00:12:51.321 "data_size": 0 00:12:51.321 } 00:12:51.321 ] 00:12:51.321 }' 00:12:51.321 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.321 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.885 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:12:51.885 [2024-07-15 10:20:16.540844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.885 [2024-07-15 10:20:16.540978] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2329700 00:12:51.885 [2024-07-15 10:20:16.540988] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:51.885 [2024-07-15 10:20:16.541108] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23293d0 00:12:51.885 [2024-07-15 10:20:16.541191] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2329700 00:12:51.885 [2024-07-15 10:20:16.541197] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2329700 00:12:51.885 [2024-07-15 10:20:16.541258] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.885 BaseBdev3 00:12:51.885 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:51.885 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:51.885 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:51.885 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:51.885 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:51.885 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:51.885 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:52.143 [ 00:12:52.143 { 00:12:52.143 "name": "BaseBdev3", 00:12:52.143 "aliases": [ 00:12:52.143 "f03121e0-674e-423d-86bd-01866a2daa0d" 00:12:52.143 ], 00:12:52.143 "product_name": "Malloc disk", 00:12:52.143 "block_size": 512, 00:12:52.143 "num_blocks": 65536, 00:12:52.143 "uuid": "f03121e0-674e-423d-86bd-01866a2daa0d", 00:12:52.143 "assigned_rate_limits": { 00:12:52.143 "rw_ios_per_sec": 0, 00:12:52.143 "rw_mbytes_per_sec": 0, 00:12:52.143 "r_mbytes_per_sec": 0, 00:12:52.143 "w_mbytes_per_sec": 0 00:12:52.143 }, 00:12:52.143 "claimed": true, 00:12:52.143 "claim_type": "exclusive_write", 00:12:52.143 "zoned": false, 00:12:52.143 "supported_io_types": { 00:12:52.143 "read": true, 00:12:52.143 "write": true, 00:12:52.143 "unmap": true, 00:12:52.143 "flush": true, 00:12:52.143 "reset": true, 00:12:52.143 "nvme_admin": false, 00:12:52.143 "nvme_io": false, 00:12:52.143 "nvme_io_md": false, 00:12:52.143 "write_zeroes": true, 00:12:52.143 "zcopy": true, 00:12:52.143 "get_zone_info": false, 00:12:52.143 "zone_management": false, 00:12:52.143 "zone_append": false, 00:12:52.143 "compare": false, 00:12:52.143 "compare_and_write": false, 00:12:52.143 "abort": true, 00:12:52.143 "seek_hole": false, 00:12:52.143 "seek_data": false, 00:12:52.143 "copy": true, 00:12:52.143 "nvme_iov_md": false 00:12:52.143 }, 00:12:52.143 "memory_domains": [ 00:12:52.143 { 00:12:52.143 "dma_device_id": "system", 00:12:52.143 "dma_device_type": 1 00:12:52.143 }, 00:12:52.143 { 00:12:52.143 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.143 "dma_device_type": 2 00:12:52.143 } 00:12:52.143 ], 00:12:52.143 "driver_specific": {} 00:12:52.143 } 00:12:52.143 ] 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.143 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.400 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.400 "name": "Existed_Raid", 00:12:52.400 "uuid": "4c8cab95-343f-4fca-be7e-9e0f04b3f64c", 00:12:52.400 "strip_size_kb": 64, 00:12:52.400 "state": "online", 00:12:52.400 "raid_level": "concat", 00:12:52.400 "superblock": true, 00:12:52.400 "num_base_bdevs": 3, 00:12:52.400 "num_base_bdevs_discovered": 3, 00:12:52.400 "num_base_bdevs_operational": 3, 00:12:52.400 "base_bdevs_list": [ 00:12:52.400 { 00:12:52.400 "name": "BaseBdev1", 00:12:52.400 "uuid": "01e6f528-1f81-4136-b192-7471879b24f5", 00:12:52.400 "is_configured": true, 00:12:52.400 "data_offset": 2048, 00:12:52.400 "data_size": 63488 00:12:52.400 }, 00:12:52.400 { 00:12:52.400 "name": "BaseBdev2", 00:12:52.400 "uuid": "e28f0089-febc-4b90-8bcc-ff6c1f1c08a6", 00:12:52.400 "is_configured": true, 00:12:52.400 "data_offset": 2048, 00:12:52.400 "data_size": 63488 00:12:52.400 }, 00:12:52.400 { 00:12:52.400 "name": "BaseBdev3", 00:12:52.400 "uuid": "f03121e0-674e-423d-86bd-01866a2daa0d", 00:12:52.400 "is_configured": true, 00:12:52.400 "data_offset": 2048, 00:12:52.400 "data_size": 63488 00:12:52.400 } 00:12:52.400 ] 00:12:52.400 }' 00:12:52.400 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.400 10:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:52.966 [2024-07-15 10:20:17.700032] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.966 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:52.966 "name": "Existed_Raid", 00:12:52.966 "aliases": [ 00:12:52.966 "4c8cab95-343f-4fca-be7e-9e0f04b3f64c" 00:12:52.966 ], 00:12:52.966 "product_name": "Raid Volume", 00:12:52.966 "block_size": 512, 00:12:52.966 "num_blocks": 190464, 00:12:52.966 "uuid": "4c8cab95-343f-4fca-be7e-9e0f04b3f64c", 00:12:52.966 "assigned_rate_limits": { 00:12:52.966 "rw_ios_per_sec": 0, 00:12:52.966 "rw_mbytes_per_sec": 0, 00:12:52.966 "r_mbytes_per_sec": 0, 00:12:52.966 "w_mbytes_per_sec": 0 00:12:52.966 }, 00:12:52.966 "claimed": false, 00:12:52.966 "zoned": false, 00:12:52.966 "supported_io_types": { 00:12:52.966 "read": true, 00:12:52.966 "write": true, 00:12:52.966 "unmap": true, 00:12:52.966 "flush": true, 00:12:52.966 "reset": true, 00:12:52.966 "nvme_admin": false, 00:12:52.966 "nvme_io": false, 00:12:52.966 "nvme_io_md": false, 00:12:52.966 "write_zeroes": true, 00:12:52.966 "zcopy": false, 00:12:52.966 "get_zone_info": false, 00:12:52.966 "zone_management": false, 00:12:52.966 "zone_append": false, 00:12:52.966 "compare": false, 00:12:52.966 "compare_and_write": false, 00:12:52.966 "abort": false, 00:12:52.966 "seek_hole": false, 00:12:52.966 "seek_data": false, 00:12:52.966 "copy": false, 00:12:52.966 "nvme_iov_md": false 00:12:52.966 }, 00:12:52.966 "memory_domains": [ 00:12:52.966 { 00:12:52.966 "dma_device_id": "system", 00:12:52.966 "dma_device_type": 1 00:12:52.966 }, 00:12:52.966 { 00:12:52.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.966 "dma_device_type": 2 00:12:52.966 }, 00:12:52.966 { 00:12:52.966 "dma_device_id": "system", 00:12:52.966 "dma_device_type": 1 00:12:52.966 }, 00:12:52.966 { 00:12:52.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.966 "dma_device_type": 2 00:12:52.966 }, 00:12:52.966 { 00:12:52.966 "dma_device_id": "system", 00:12:52.966 "dma_device_type": 1 00:12:52.966 }, 00:12:52.966 { 00:12:52.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.966 "dma_device_type": 2 00:12:52.966 } 00:12:52.966 ], 00:12:52.966 "driver_specific": { 00:12:52.966 "raid": { 00:12:52.967 "uuid": "4c8cab95-343f-4fca-be7e-9e0f04b3f64c", 00:12:52.967 "strip_size_kb": 64, 00:12:52.967 "state": "online", 00:12:52.967 "raid_level": "concat", 00:12:52.967 "superblock": true, 00:12:52.967 "num_base_bdevs": 3, 00:12:52.967 "num_base_bdevs_discovered": 3, 00:12:52.967 "num_base_bdevs_operational": 3, 00:12:52.967 "base_bdevs_list": [ 00:12:52.967 { 
00:12:52.967 "name": "BaseBdev1", 00:12:52.967 "uuid": "01e6f528-1f81-4136-b192-7471879b24f5", 00:12:52.967 "is_configured": true, 00:12:52.967 "data_offset": 2048, 00:12:52.967 "data_size": 63488 00:12:52.967 }, 00:12:52.967 { 00:12:52.967 "name": "BaseBdev2", 00:12:52.967 "uuid": "e28f0089-febc-4b90-8bcc-ff6c1f1c08a6", 00:12:52.967 "is_configured": true, 00:12:52.967 "data_offset": 2048, 00:12:52.967 "data_size": 63488 00:12:52.967 }, 00:12:52.967 { 00:12:52.967 "name": "BaseBdev3", 00:12:52.967 "uuid": "f03121e0-674e-423d-86bd-01866a2daa0d", 00:12:52.967 "is_configured": true, 00:12:52.967 "data_offset": 2048, 00:12:52.967 "data_size": 63488 00:12:52.967 } 00:12:52.967 ] 00:12:52.967 } 00:12:52.967 } 00:12:52.967 }' 00:12:52.967 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.225 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:53.225 BaseBdev2 00:12:53.225 BaseBdev3' 00:12:53.225 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.225 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:53.225 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.225 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.225 "name": "BaseBdev1", 00:12:53.225 "aliases": [ 00:12:53.225 "01e6f528-1f81-4136-b192-7471879b24f5" 00:12:53.225 ], 00:12:53.225 "product_name": "Malloc disk", 00:12:53.225 "block_size": 512, 00:12:53.225 "num_blocks": 65536, 00:12:53.225 "uuid": "01e6f528-1f81-4136-b192-7471879b24f5", 00:12:53.225 "assigned_rate_limits": { 00:12:53.225 "rw_ios_per_sec": 0, 00:12:53.225 "rw_mbytes_per_sec": 0, 00:12:53.225 "r_mbytes_per_sec": 0, 00:12:53.225 "w_mbytes_per_sec": 0 00:12:53.225 }, 00:12:53.225 "claimed": true, 00:12:53.225 "claim_type": "exclusive_write", 00:12:53.225 "zoned": false, 00:12:53.225 "supported_io_types": { 00:12:53.225 "read": true, 00:12:53.225 "write": true, 00:12:53.225 "unmap": true, 00:12:53.225 "flush": true, 00:12:53.225 "reset": true, 00:12:53.225 "nvme_admin": false, 00:12:53.225 "nvme_io": false, 00:12:53.225 "nvme_io_md": false, 00:12:53.225 "write_zeroes": true, 00:12:53.225 "zcopy": true, 00:12:53.225 "get_zone_info": false, 00:12:53.225 "zone_management": false, 00:12:53.225 "zone_append": false, 00:12:53.225 "compare": false, 00:12:53.225 "compare_and_write": false, 00:12:53.225 "abort": true, 00:12:53.225 "seek_hole": false, 00:12:53.225 "seek_data": false, 00:12:53.225 "copy": true, 00:12:53.225 "nvme_iov_md": false 00:12:53.225 }, 00:12:53.225 "memory_domains": [ 00:12:53.225 { 00:12:53.225 "dma_device_id": "system", 00:12:53.225 "dma_device_type": 1 00:12:53.225 }, 00:12:53.225 { 00:12:53.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.225 "dma_device_type": 2 00:12:53.225 } 00:12:53.225 ], 00:12:53.225 "driver_specific": {} 00:12:53.225 }' 00:12:53.226 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.226 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.226 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.226 
10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:53.483 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.740 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.740 "name": "BaseBdev2", 00:12:53.740 "aliases": [ 00:12:53.740 "e28f0089-febc-4b90-8bcc-ff6c1f1c08a6" 00:12:53.740 ], 00:12:53.740 "product_name": "Malloc disk", 00:12:53.740 "block_size": 512, 00:12:53.740 "num_blocks": 65536, 00:12:53.740 "uuid": "e28f0089-febc-4b90-8bcc-ff6c1f1c08a6", 00:12:53.740 "assigned_rate_limits": { 00:12:53.740 "rw_ios_per_sec": 0, 00:12:53.740 "rw_mbytes_per_sec": 0, 00:12:53.740 "r_mbytes_per_sec": 0, 00:12:53.740 "w_mbytes_per_sec": 0 00:12:53.740 }, 00:12:53.740 "claimed": true, 00:12:53.740 "claim_type": "exclusive_write", 00:12:53.740 "zoned": false, 00:12:53.740 "supported_io_types": { 00:12:53.740 "read": true, 00:12:53.740 "write": true, 00:12:53.740 "unmap": true, 00:12:53.740 "flush": true, 00:12:53.740 "reset": true, 00:12:53.740 "nvme_admin": false, 00:12:53.740 "nvme_io": false, 00:12:53.740 "nvme_io_md": false, 00:12:53.740 "write_zeroes": true, 00:12:53.740 "zcopy": true, 00:12:53.740 "get_zone_info": false, 00:12:53.740 "zone_management": false, 00:12:53.740 "zone_append": false, 00:12:53.740 "compare": false, 00:12:53.740 "compare_and_write": false, 00:12:53.740 "abort": true, 00:12:53.740 "seek_hole": false, 00:12:53.740 "seek_data": false, 00:12:53.740 "copy": true, 00:12:53.740 "nvme_iov_md": false 00:12:53.740 }, 00:12:53.740 "memory_domains": [ 00:12:53.740 { 00:12:53.740 "dma_device_id": "system", 00:12:53.740 "dma_device_type": 1 00:12:53.740 }, 00:12:53.740 { 00:12:53.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.740 "dma_device_type": 2 00:12:53.740 } 00:12:53.740 ], 00:12:53.740 "driver_specific": {} 00:12:53.740 }' 00:12:53.740 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.740 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.740 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.740 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.740 10:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.740 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.740 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.997 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.997 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.997 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.997 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.997 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.997 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.997 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.997 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:54.254 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.254 "name": "BaseBdev3", 00:12:54.254 "aliases": [ 00:12:54.254 "f03121e0-674e-423d-86bd-01866a2daa0d" 00:12:54.254 ], 00:12:54.254 "product_name": "Malloc disk", 00:12:54.254 "block_size": 512, 00:12:54.254 "num_blocks": 65536, 00:12:54.254 "uuid": "f03121e0-674e-423d-86bd-01866a2daa0d", 00:12:54.254 "assigned_rate_limits": { 00:12:54.254 "rw_ios_per_sec": 0, 00:12:54.254 "rw_mbytes_per_sec": 0, 00:12:54.254 "r_mbytes_per_sec": 0, 00:12:54.254 "w_mbytes_per_sec": 0 00:12:54.254 }, 00:12:54.254 "claimed": true, 00:12:54.254 "claim_type": "exclusive_write", 00:12:54.254 "zoned": false, 00:12:54.254 "supported_io_types": { 00:12:54.254 "read": true, 00:12:54.254 "write": true, 00:12:54.254 "unmap": true, 00:12:54.254 "flush": true, 00:12:54.254 "reset": true, 00:12:54.254 "nvme_admin": false, 00:12:54.254 "nvme_io": false, 00:12:54.254 "nvme_io_md": false, 00:12:54.254 "write_zeroes": true, 00:12:54.254 "zcopy": true, 00:12:54.254 "get_zone_info": false, 00:12:54.254 "zone_management": false, 00:12:54.254 "zone_append": false, 00:12:54.254 "compare": false, 00:12:54.254 "compare_and_write": false, 00:12:54.254 "abort": true, 00:12:54.254 "seek_hole": false, 00:12:54.254 "seek_data": false, 00:12:54.254 "copy": true, 00:12:54.254 "nvme_iov_md": false 00:12:54.254 }, 00:12:54.254 "memory_domains": [ 00:12:54.254 { 00:12:54.254 "dma_device_id": "system", 00:12:54.254 "dma_device_type": 1 00:12:54.254 }, 00:12:54.254 { 00:12:54.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.254 "dma_device_type": 2 00:12:54.254 } 00:12:54.254 ], 00:12:54.254 "driver_specific": {} 00:12:54.254 }' 00:12:54.254 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.254 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.254 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.254 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.254 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.254 10:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.254 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.254 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.254 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.254 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.512 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.512 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.512 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:54.512 [2024-07-15 10:20:19.255965] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.512 [2024-07-15 10:20:19.255985] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.512 [2024-07-15 10:20:19.256012] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.512 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:54.512 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:54.512 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.513 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.773 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.773 "name": "Existed_Raid", 00:12:54.773 "uuid": "4c8cab95-343f-4fca-be7e-9e0f04b3f64c", 
00:12:54.773 "strip_size_kb": 64, 00:12:54.773 "state": "offline", 00:12:54.773 "raid_level": "concat", 00:12:54.773 "superblock": true, 00:12:54.773 "num_base_bdevs": 3, 00:12:54.773 "num_base_bdevs_discovered": 2, 00:12:54.773 "num_base_bdevs_operational": 2, 00:12:54.773 "base_bdevs_list": [ 00:12:54.773 { 00:12:54.773 "name": null, 00:12:54.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.773 "is_configured": false, 00:12:54.773 "data_offset": 2048, 00:12:54.773 "data_size": 63488 00:12:54.773 }, 00:12:54.773 { 00:12:54.773 "name": "BaseBdev2", 00:12:54.773 "uuid": "e28f0089-febc-4b90-8bcc-ff6c1f1c08a6", 00:12:54.773 "is_configured": true, 00:12:54.773 "data_offset": 2048, 00:12:54.773 "data_size": 63488 00:12:54.773 }, 00:12:54.773 { 00:12:54.773 "name": "BaseBdev3", 00:12:54.773 "uuid": "f03121e0-674e-423d-86bd-01866a2daa0d", 00:12:54.773 "is_configured": true, 00:12:54.773 "data_offset": 2048, 00:12:54.773 "data_size": 63488 00:12:54.773 } 00:12:54.773 ] 00:12:54.773 }' 00:12:54.773 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.773 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.348 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:55.348 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:55.348 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:55.348 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.348 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:55.348 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.348 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:55.607 [2024-07-15 10:20:20.243358] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.607 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:55.607 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:55.607 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.607 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:55.865 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:55.865 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.865 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:55.865 [2024-07-15 10:20:20.582031] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.865 [2024-07-15 10:20:20.582064] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2329700 name Existed_Raid, state offline 00:12:55.865 
10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:55.865 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:55.865 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:55.865 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.123 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:56.123 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:56.123 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:56.123 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:56.123 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:56.123 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.381 BaseBdev2 00:12:56.381 10:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:56.381 10:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:56.381 10:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:56.381 10:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:56.381 10:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:56.381 10:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:56.381 10:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.381 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.639 [ 00:12:56.639 { 00:12:56.639 "name": "BaseBdev2", 00:12:56.639 "aliases": [ 00:12:56.639 "fbd8b136-2080-4996-b49c-0bd0cc281562" 00:12:56.639 ], 00:12:56.639 "product_name": "Malloc disk", 00:12:56.639 "block_size": 512, 00:12:56.639 "num_blocks": 65536, 00:12:56.639 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:12:56.639 "assigned_rate_limits": { 00:12:56.639 "rw_ios_per_sec": 0, 00:12:56.639 "rw_mbytes_per_sec": 0, 00:12:56.639 "r_mbytes_per_sec": 0, 00:12:56.639 "w_mbytes_per_sec": 0 00:12:56.639 }, 00:12:56.639 "claimed": false, 00:12:56.639 "zoned": false, 00:12:56.639 "supported_io_types": { 00:12:56.639 "read": true, 00:12:56.639 "write": true, 00:12:56.639 "unmap": true, 00:12:56.639 "flush": true, 00:12:56.639 "reset": true, 00:12:56.639 "nvme_admin": false, 00:12:56.639 "nvme_io": false, 00:12:56.639 "nvme_io_md": false, 00:12:56.639 "write_zeroes": true, 00:12:56.639 "zcopy": true, 00:12:56.639 "get_zone_info": false, 00:12:56.639 "zone_management": false, 00:12:56.639 "zone_append": false, 00:12:56.639 "compare": false, 00:12:56.639 "compare_and_write": false, 00:12:56.639 "abort": true, 
00:12:56.639 "seek_hole": false, 00:12:56.639 "seek_data": false, 00:12:56.639 "copy": true, 00:12:56.639 "nvme_iov_md": false 00:12:56.639 }, 00:12:56.639 "memory_domains": [ 00:12:56.639 { 00:12:56.639 "dma_device_id": "system", 00:12:56.639 "dma_device_type": 1 00:12:56.639 }, 00:12:56.639 { 00:12:56.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.639 "dma_device_type": 2 00:12:56.639 } 00:12:56.639 ], 00:12:56.639 "driver_specific": {} 00:12:56.639 } 00:12:56.639 ] 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:56.639 BaseBdev3 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:56.639 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:56.897 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:56.897 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.897 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:57.155 [ 00:12:57.155 { 00:12:57.155 "name": "BaseBdev3", 00:12:57.155 "aliases": [ 00:12:57.155 "2baafdb5-c49f-4686-bf6e-6d546c2a3da9" 00:12:57.155 ], 00:12:57.155 "product_name": "Malloc disk", 00:12:57.155 "block_size": 512, 00:12:57.155 "num_blocks": 65536, 00:12:57.155 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:12:57.155 "assigned_rate_limits": { 00:12:57.155 "rw_ios_per_sec": 0, 00:12:57.155 "rw_mbytes_per_sec": 0, 00:12:57.155 "r_mbytes_per_sec": 0, 00:12:57.155 "w_mbytes_per_sec": 0 00:12:57.155 }, 00:12:57.155 "claimed": false, 00:12:57.155 "zoned": false, 00:12:57.155 "supported_io_types": { 00:12:57.155 "read": true, 00:12:57.155 "write": true, 00:12:57.155 "unmap": true, 00:12:57.155 "flush": true, 00:12:57.155 "reset": true, 00:12:57.155 "nvme_admin": false, 00:12:57.155 "nvme_io": false, 00:12:57.155 "nvme_io_md": false, 00:12:57.155 "write_zeroes": true, 00:12:57.155 "zcopy": true, 00:12:57.155 "get_zone_info": false, 00:12:57.156 "zone_management": false, 00:12:57.156 "zone_append": false, 00:12:57.156 "compare": false, 00:12:57.156 "compare_and_write": false, 00:12:57.156 "abort": true, 00:12:57.156 "seek_hole": false, 00:12:57.156 "seek_data": false, 00:12:57.156 "copy": true, 00:12:57.156 "nvme_iov_md": false 00:12:57.156 }, 00:12:57.156 "memory_domains": [ 00:12:57.156 { 00:12:57.156 "dma_device_id": "system", 00:12:57.156 
"dma_device_type": 1 00:12:57.156 }, 00:12:57.156 { 00:12:57.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.156 "dma_device_type": 2 00:12:57.156 } 00:12:57.156 ], 00:12:57.156 "driver_specific": {} 00:12:57.156 } 00:12:57.156 ] 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:57.156 [2024-07-15 10:20:21.882492] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:57.156 [2024-07-15 10:20:21.882522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:57.156 [2024-07-15 10:20:21.882534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.156 [2024-07-15 10:20:21.883447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.156 10:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.414 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.414 "name": "Existed_Raid", 00:12:57.414 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:12:57.414 "strip_size_kb": 64, 00:12:57.414 "state": "configuring", 00:12:57.414 "raid_level": "concat", 00:12:57.414 "superblock": true, 00:12:57.414 "num_base_bdevs": 3, 00:12:57.414 "num_base_bdevs_discovered": 2, 00:12:57.414 "num_base_bdevs_operational": 3, 00:12:57.414 "base_bdevs_list": [ 00:12:57.414 { 00:12:57.414 "name": "BaseBdev1", 00:12:57.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.414 "is_configured": false, 00:12:57.414 "data_offset": 0, 
00:12:57.414 "data_size": 0 00:12:57.414 }, 00:12:57.414 { 00:12:57.414 "name": "BaseBdev2", 00:12:57.414 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:12:57.414 "is_configured": true, 00:12:57.414 "data_offset": 2048, 00:12:57.414 "data_size": 63488 00:12:57.414 }, 00:12:57.414 { 00:12:57.414 "name": "BaseBdev3", 00:12:57.414 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:12:57.414 "is_configured": true, 00:12:57.414 "data_offset": 2048, 00:12:57.414 "data_size": 63488 00:12:57.414 } 00:12:57.414 ] 00:12:57.414 }' 00:12:57.414 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.414 10:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:57.980 [2024-07-15 10:20:22.692544] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.980 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.238 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:58.238 "name": "Existed_Raid", 00:12:58.238 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:12:58.238 "strip_size_kb": 64, 00:12:58.238 "state": "configuring", 00:12:58.238 "raid_level": "concat", 00:12:58.238 "superblock": true, 00:12:58.238 "num_base_bdevs": 3, 00:12:58.238 "num_base_bdevs_discovered": 1, 00:12:58.238 "num_base_bdevs_operational": 3, 00:12:58.238 "base_bdevs_list": [ 00:12:58.238 { 00:12:58.238 "name": "BaseBdev1", 00:12:58.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.238 "is_configured": false, 00:12:58.238 "data_offset": 0, 00:12:58.238 "data_size": 0 00:12:58.238 }, 00:12:58.238 { 00:12:58.238 "name": null, 00:12:58.238 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:12:58.238 "is_configured": false, 00:12:58.238 "data_offset": 2048, 00:12:58.238 "data_size": 63488 00:12:58.238 }, 
00:12:58.238 { 00:12:58.238 "name": "BaseBdev3", 00:12:58.238 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:12:58.238 "is_configured": true, 00:12:58.238 "data_offset": 2048, 00:12:58.238 "data_size": 63488 00:12:58.238 } 00:12:58.238 ] 00:12:58.238 }' 00:12:58.238 10:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:58.238 10:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.804 10:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.804 10:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:58.804 10:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:58.804 10:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:59.061 [2024-07-15 10:20:23.701984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.061 BaseBdev1 00:12:59.061 10:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:59.061 10:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:59.061 10:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:59.061 10:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:59.061 10:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:59.061 10:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:59.061 10:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:59.319 10:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.319 [ 00:12:59.319 { 00:12:59.319 "name": "BaseBdev1", 00:12:59.319 "aliases": [ 00:12:59.319 "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e" 00:12:59.319 ], 00:12:59.319 "product_name": "Malloc disk", 00:12:59.319 "block_size": 512, 00:12:59.319 "num_blocks": 65536, 00:12:59.319 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:12:59.319 "assigned_rate_limits": { 00:12:59.319 "rw_ios_per_sec": 0, 00:12:59.319 "rw_mbytes_per_sec": 0, 00:12:59.319 "r_mbytes_per_sec": 0, 00:12:59.319 "w_mbytes_per_sec": 0 00:12:59.319 }, 00:12:59.319 "claimed": true, 00:12:59.319 "claim_type": "exclusive_write", 00:12:59.319 "zoned": false, 00:12:59.319 "supported_io_types": { 00:12:59.319 "read": true, 00:12:59.319 "write": true, 00:12:59.319 "unmap": true, 00:12:59.319 "flush": true, 00:12:59.319 "reset": true, 00:12:59.319 "nvme_admin": false, 00:12:59.319 "nvme_io": false, 00:12:59.319 "nvme_io_md": false, 00:12:59.319 "write_zeroes": true, 00:12:59.319 "zcopy": true, 00:12:59.319 "get_zone_info": false, 00:12:59.319 "zone_management": false, 00:12:59.319 "zone_append": false, 00:12:59.319 "compare": false, 00:12:59.319 "compare_and_write": 
false, 00:12:59.319 "abort": true, 00:12:59.319 "seek_hole": false, 00:12:59.319 "seek_data": false, 00:12:59.319 "copy": true, 00:12:59.319 "nvme_iov_md": false 00:12:59.319 }, 00:12:59.319 "memory_domains": [ 00:12:59.319 { 00:12:59.319 "dma_device_id": "system", 00:12:59.319 "dma_device_type": 1 00:12:59.319 }, 00:12:59.319 { 00:12:59.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.319 "dma_device_type": 2 00:12:59.319 } 00:12:59.319 ], 00:12:59.319 "driver_specific": {} 00:12:59.319 } 00:12:59.319 ] 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.319 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.577 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.577 "name": "Existed_Raid", 00:12:59.577 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:12:59.577 "strip_size_kb": 64, 00:12:59.577 "state": "configuring", 00:12:59.577 "raid_level": "concat", 00:12:59.577 "superblock": true, 00:12:59.577 "num_base_bdevs": 3, 00:12:59.577 "num_base_bdevs_discovered": 2, 00:12:59.577 "num_base_bdevs_operational": 3, 00:12:59.577 "base_bdevs_list": [ 00:12:59.577 { 00:12:59.577 "name": "BaseBdev1", 00:12:59.577 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:12:59.577 "is_configured": true, 00:12:59.577 "data_offset": 2048, 00:12:59.577 "data_size": 63488 00:12:59.577 }, 00:12:59.577 { 00:12:59.577 "name": null, 00:12:59.577 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:12:59.577 "is_configured": false, 00:12:59.577 "data_offset": 2048, 00:12:59.577 "data_size": 63488 00:12:59.577 }, 00:12:59.577 { 00:12:59.577 "name": "BaseBdev3", 00:12:59.577 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:12:59.577 "is_configured": true, 00:12:59.577 "data_offset": 2048, 00:12:59.577 "data_size": 63488 00:12:59.577 } 00:12:59.577 ] 00:12:59.577 }' 00:12:59.577 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.577 10:20:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.143 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.143 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:00.143 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:00.143 10:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:00.401 [2024-07-15 10:20:25.057479] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.401 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.660 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.660 "name": "Existed_Raid", 00:13:00.660 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:13:00.660 "strip_size_kb": 64, 00:13:00.660 "state": "configuring", 00:13:00.660 "raid_level": "concat", 00:13:00.660 "superblock": true, 00:13:00.660 "num_base_bdevs": 3, 00:13:00.660 "num_base_bdevs_discovered": 1, 00:13:00.660 "num_base_bdevs_operational": 3, 00:13:00.660 "base_bdevs_list": [ 00:13:00.660 { 00:13:00.660 "name": "BaseBdev1", 00:13:00.660 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:13:00.660 "is_configured": true, 00:13:00.660 "data_offset": 2048, 00:13:00.660 "data_size": 63488 00:13:00.660 }, 00:13:00.660 { 00:13:00.660 "name": null, 00:13:00.660 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:13:00.660 "is_configured": false, 00:13:00.660 "data_offset": 2048, 00:13:00.660 "data_size": 63488 00:13:00.660 }, 00:13:00.660 { 00:13:00.660 "name": null, 00:13:00.660 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:13:00.660 "is_configured": false, 00:13:00.660 "data_offset": 2048, 00:13:00.660 "data_size": 63488 00:13:00.660 } 00:13:00.660 ] 00:13:00.660 }' 
00:13:00.660 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.660 10:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.226 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.226 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.226 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:01.226 10:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:01.483 [2024-07-15 10:20:26.048039] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.483 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:01.483 "name": "Existed_Raid", 00:13:01.483 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:13:01.483 "strip_size_kb": 64, 00:13:01.483 "state": "configuring", 00:13:01.483 "raid_level": "concat", 00:13:01.483 "superblock": true, 00:13:01.483 "num_base_bdevs": 3, 00:13:01.483 "num_base_bdevs_discovered": 2, 00:13:01.483 "num_base_bdevs_operational": 3, 00:13:01.483 "base_bdevs_list": [ 00:13:01.483 { 00:13:01.483 "name": "BaseBdev1", 00:13:01.483 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:13:01.483 "is_configured": true, 00:13:01.483 "data_offset": 2048, 00:13:01.483 "data_size": 63488 00:13:01.483 }, 00:13:01.483 { 00:13:01.483 "name": null, 00:13:01.483 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:13:01.483 "is_configured": false, 00:13:01.483 "data_offset": 2048, 00:13:01.483 "data_size": 63488 00:13:01.483 }, 00:13:01.483 { 00:13:01.483 "name": "BaseBdev3", 
00:13:01.483 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:13:01.483 "is_configured": true, 00:13:01.483 "data_offset": 2048, 00:13:01.483 "data_size": 63488 00:13:01.483 } 00:13:01.483 ] 00:13:01.483 }' 00:13:01.484 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:01.484 10:20:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.050 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.050 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:02.308 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:02.308 10:20:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:02.308 [2024-07-15 10:20:27.018550] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.308 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:02.308 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.308 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.308 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:02.308 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.308 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:02.308 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.308 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.309 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.309 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.309 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.309 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.567 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.567 "name": "Existed_Raid", 00:13:02.567 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:13:02.567 "strip_size_kb": 64, 00:13:02.567 "state": "configuring", 00:13:02.567 "raid_level": "concat", 00:13:02.567 "superblock": true, 00:13:02.567 "num_base_bdevs": 3, 00:13:02.567 "num_base_bdevs_discovered": 1, 00:13:02.567 "num_base_bdevs_operational": 3, 00:13:02.567 "base_bdevs_list": [ 00:13:02.567 { 00:13:02.567 "name": null, 00:13:02.567 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:13:02.567 "is_configured": false, 00:13:02.567 "data_offset": 2048, 00:13:02.567 "data_size": 63488 00:13:02.567 }, 00:13:02.567 { 00:13:02.567 "name": null, 00:13:02.567 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 
00:13:02.567 "is_configured": false, 00:13:02.567 "data_offset": 2048, 00:13:02.567 "data_size": 63488 00:13:02.567 }, 00:13:02.567 { 00:13:02.567 "name": "BaseBdev3", 00:13:02.567 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:13:02.567 "is_configured": true, 00:13:02.567 "data_offset": 2048, 00:13:02.567 "data_size": 63488 00:13:02.567 } 00:13:02.567 ] 00:13:02.567 }' 00:13:02.567 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.567 10:20:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.132 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.132 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:03.132 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:03.132 10:20:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:03.390 [2024-07-15 10:20:28.022805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.390 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.648 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.648 "name": "Existed_Raid", 00:13:03.648 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:13:03.648 "strip_size_kb": 64, 00:13:03.648 "state": "configuring", 00:13:03.648 "raid_level": "concat", 00:13:03.648 "superblock": true, 00:13:03.648 "num_base_bdevs": 3, 00:13:03.648 "num_base_bdevs_discovered": 2, 00:13:03.648 "num_base_bdevs_operational": 3, 00:13:03.648 "base_bdevs_list": [ 00:13:03.648 { 00:13:03.648 "name": null, 00:13:03.648 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:13:03.648 
"is_configured": false, 00:13:03.648 "data_offset": 2048, 00:13:03.648 "data_size": 63488 00:13:03.648 }, 00:13:03.648 { 00:13:03.648 "name": "BaseBdev2", 00:13:03.648 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:13:03.648 "is_configured": true, 00:13:03.648 "data_offset": 2048, 00:13:03.648 "data_size": 63488 00:13:03.648 }, 00:13:03.648 { 00:13:03.648 "name": "BaseBdev3", 00:13:03.648 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:13:03.648 "is_configured": true, 00:13:03.648 "data_offset": 2048, 00:13:03.648 "data_size": 63488 00:13:03.648 } 00:13:03.648 ] 00:13:03.648 }' 00:13:03.648 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.648 10:20:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.215 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.215 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:04.215 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:04.215 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.215 10:20:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:04.473 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e 00:13:04.473 [2024-07-15 10:20:29.204553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:04.473 [2024-07-15 10:20:29.204671] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2327a80 00:13:04.473 [2024-07-15 10:20:29.204680] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:04.473 [2024-07-15 10:20:29.204800] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24dba50 00:13:04.473 [2024-07-15 10:20:29.204877] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2327a80 00:13:04.473 [2024-07-15 10:20:29.204883] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2327a80 00:13:04.473 [2024-07-15 10:20:29.204952] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.473 NewBaseBdev 00:13:04.473 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:04.473 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:04.473 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:04.473 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:04.473 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:04.473 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:04.473 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.731 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:04.989 [ 00:13:04.989 { 00:13:04.989 "name": "NewBaseBdev", 00:13:04.989 "aliases": [ 00:13:04.989 "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e" 00:13:04.989 ], 00:13:04.989 "product_name": "Malloc disk", 00:13:04.989 "block_size": 512, 00:13:04.989 "num_blocks": 65536, 00:13:04.989 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:13:04.989 "assigned_rate_limits": { 00:13:04.989 "rw_ios_per_sec": 0, 00:13:04.989 "rw_mbytes_per_sec": 0, 00:13:04.989 "r_mbytes_per_sec": 0, 00:13:04.989 "w_mbytes_per_sec": 0 00:13:04.989 }, 00:13:04.989 "claimed": true, 00:13:04.989 "claim_type": "exclusive_write", 00:13:04.989 "zoned": false, 00:13:04.989 "supported_io_types": { 00:13:04.989 "read": true, 00:13:04.989 "write": true, 00:13:04.990 "unmap": true, 00:13:04.990 "flush": true, 00:13:04.990 "reset": true, 00:13:04.990 "nvme_admin": false, 00:13:04.990 "nvme_io": false, 00:13:04.990 "nvme_io_md": false, 00:13:04.990 "write_zeroes": true, 00:13:04.990 "zcopy": true, 00:13:04.990 "get_zone_info": false, 00:13:04.990 "zone_management": false, 00:13:04.990 "zone_append": false, 00:13:04.990 "compare": false, 00:13:04.990 "compare_and_write": false, 00:13:04.990 "abort": true, 00:13:04.990 "seek_hole": false, 00:13:04.990 "seek_data": false, 00:13:04.990 "copy": true, 00:13:04.990 "nvme_iov_md": false 00:13:04.990 }, 00:13:04.990 "memory_domains": [ 00:13:04.990 { 00:13:04.990 "dma_device_id": "system", 00:13:04.990 "dma_device_type": 1 00:13:04.990 }, 00:13:04.990 { 00:13:04.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.990 "dma_device_type": 2 00:13:04.990 } 00:13:04.990 ], 00:13:04.990 "driver_specific": {} 00:13:04.990 } 00:13:04.990 ] 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.990 "name": "Existed_Raid", 00:13:04.990 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:13:04.990 "strip_size_kb": 64, 00:13:04.990 "state": "online", 00:13:04.990 "raid_level": "concat", 00:13:04.990 "superblock": true, 00:13:04.990 "num_base_bdevs": 3, 00:13:04.990 "num_base_bdevs_discovered": 3, 00:13:04.990 "num_base_bdevs_operational": 3, 00:13:04.990 "base_bdevs_list": [ 00:13:04.990 { 00:13:04.990 "name": "NewBaseBdev", 00:13:04.990 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:13:04.990 "is_configured": true, 00:13:04.990 "data_offset": 2048, 00:13:04.990 "data_size": 63488 00:13:04.990 }, 00:13:04.990 { 00:13:04.990 "name": "BaseBdev2", 00:13:04.990 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:13:04.990 "is_configured": true, 00:13:04.990 "data_offset": 2048, 00:13:04.990 "data_size": 63488 00:13:04.990 }, 00:13:04.990 { 00:13:04.990 "name": "BaseBdev3", 00:13:04.990 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:13:04.990 "is_configured": true, 00:13:04.990 "data_offset": 2048, 00:13:04.990 "data_size": 63488 00:13:04.990 } 00:13:04.990 ] 00:13:04.990 }' 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.990 10:20:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.557 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:05.557 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:05.557 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:05.557 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:05.557 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:05.557 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:05.557 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:05.557 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:05.816 [2024-07-15 10:20:30.371745] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.816 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:05.816 "name": "Existed_Raid", 00:13:05.816 "aliases": [ 00:13:05.816 "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f" 00:13:05.816 ], 00:13:05.816 "product_name": "Raid Volume", 00:13:05.816 "block_size": 512, 00:13:05.816 "num_blocks": 190464, 00:13:05.816 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:13:05.816 "assigned_rate_limits": { 00:13:05.816 "rw_ios_per_sec": 0, 00:13:05.816 "rw_mbytes_per_sec": 0, 00:13:05.816 "r_mbytes_per_sec": 0, 00:13:05.816 "w_mbytes_per_sec": 0 00:13:05.816 }, 00:13:05.816 "claimed": false, 00:13:05.816 "zoned": false, 00:13:05.816 "supported_io_types": { 00:13:05.816 "read": true, 00:13:05.816 "write": true, 00:13:05.816 "unmap": true, 00:13:05.816 "flush": true, 00:13:05.816 "reset": true, 00:13:05.816 "nvme_admin": false, 00:13:05.816 
"nvme_io": false, 00:13:05.816 "nvme_io_md": false, 00:13:05.816 "write_zeroes": true, 00:13:05.816 "zcopy": false, 00:13:05.816 "get_zone_info": false, 00:13:05.816 "zone_management": false, 00:13:05.816 "zone_append": false, 00:13:05.816 "compare": false, 00:13:05.816 "compare_and_write": false, 00:13:05.816 "abort": false, 00:13:05.817 "seek_hole": false, 00:13:05.817 "seek_data": false, 00:13:05.817 "copy": false, 00:13:05.817 "nvme_iov_md": false 00:13:05.817 }, 00:13:05.817 "memory_domains": [ 00:13:05.817 { 00:13:05.817 "dma_device_id": "system", 00:13:05.817 "dma_device_type": 1 00:13:05.817 }, 00:13:05.817 { 00:13:05.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.817 "dma_device_type": 2 00:13:05.817 }, 00:13:05.817 { 00:13:05.817 "dma_device_id": "system", 00:13:05.817 "dma_device_type": 1 00:13:05.817 }, 00:13:05.817 { 00:13:05.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.817 "dma_device_type": 2 00:13:05.817 }, 00:13:05.817 { 00:13:05.817 "dma_device_id": "system", 00:13:05.817 "dma_device_type": 1 00:13:05.817 }, 00:13:05.817 { 00:13:05.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.817 "dma_device_type": 2 00:13:05.817 } 00:13:05.817 ], 00:13:05.817 "driver_specific": { 00:13:05.817 "raid": { 00:13:05.817 "uuid": "9cdf81e1-2c22-4ee8-b3c6-38a21ca38e9f", 00:13:05.817 "strip_size_kb": 64, 00:13:05.817 "state": "online", 00:13:05.817 "raid_level": "concat", 00:13:05.817 "superblock": true, 00:13:05.817 "num_base_bdevs": 3, 00:13:05.817 "num_base_bdevs_discovered": 3, 00:13:05.817 "num_base_bdevs_operational": 3, 00:13:05.817 "base_bdevs_list": [ 00:13:05.817 { 00:13:05.817 "name": "NewBaseBdev", 00:13:05.817 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:13:05.817 "is_configured": true, 00:13:05.817 "data_offset": 2048, 00:13:05.817 "data_size": 63488 00:13:05.817 }, 00:13:05.817 { 00:13:05.817 "name": "BaseBdev2", 00:13:05.817 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:13:05.817 "is_configured": true, 00:13:05.817 "data_offset": 2048, 00:13:05.817 "data_size": 63488 00:13:05.817 }, 00:13:05.817 { 00:13:05.817 "name": "BaseBdev3", 00:13:05.817 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:13:05.817 "is_configured": true, 00:13:05.817 "data_offset": 2048, 00:13:05.817 "data_size": 63488 00:13:05.817 } 00:13:05.817 ] 00:13:05.817 } 00:13:05.817 } 00:13:05.817 }' 00:13:05.817 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:05.817 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:05.817 BaseBdev2 00:13:05.817 BaseBdev3' 00:13:05.817 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:05.817 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:05.817 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:06.075 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:06.075 "name": "NewBaseBdev", 00:13:06.075 "aliases": [ 00:13:06.075 "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e" 00:13:06.075 ], 00:13:06.075 "product_name": "Malloc disk", 00:13:06.075 "block_size": 512, 00:13:06.075 "num_blocks": 65536, 00:13:06.075 "uuid": "afa1350f-2b4d-4d13-a2d8-da7dbcb8bb6e", 00:13:06.075 
"assigned_rate_limits": { 00:13:06.075 "rw_ios_per_sec": 0, 00:13:06.075 "rw_mbytes_per_sec": 0, 00:13:06.075 "r_mbytes_per_sec": 0, 00:13:06.075 "w_mbytes_per_sec": 0 00:13:06.075 }, 00:13:06.075 "claimed": true, 00:13:06.075 "claim_type": "exclusive_write", 00:13:06.075 "zoned": false, 00:13:06.075 "supported_io_types": { 00:13:06.075 "read": true, 00:13:06.075 "write": true, 00:13:06.075 "unmap": true, 00:13:06.075 "flush": true, 00:13:06.075 "reset": true, 00:13:06.075 "nvme_admin": false, 00:13:06.075 "nvme_io": false, 00:13:06.076 "nvme_io_md": false, 00:13:06.076 "write_zeroes": true, 00:13:06.076 "zcopy": true, 00:13:06.076 "get_zone_info": false, 00:13:06.076 "zone_management": false, 00:13:06.076 "zone_append": false, 00:13:06.076 "compare": false, 00:13:06.076 "compare_and_write": false, 00:13:06.076 "abort": true, 00:13:06.076 "seek_hole": false, 00:13:06.076 "seek_data": false, 00:13:06.076 "copy": true, 00:13:06.076 "nvme_iov_md": false 00:13:06.076 }, 00:13:06.076 "memory_domains": [ 00:13:06.076 { 00:13:06.076 "dma_device_id": "system", 00:13:06.076 "dma_device_type": 1 00:13:06.076 }, 00:13:06.076 { 00:13:06.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.076 "dma_device_type": 2 00:13:06.076 } 00:13:06.076 ], 00:13:06.076 "driver_specific": {} 00:13:06.076 }' 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.076 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.334 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.334 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:06.334 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:06.334 10:20:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:06.334 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:06.334 "name": "BaseBdev2", 00:13:06.334 "aliases": [ 00:13:06.334 "fbd8b136-2080-4996-b49c-0bd0cc281562" 00:13:06.334 ], 00:13:06.334 "product_name": "Malloc disk", 00:13:06.334 "block_size": 512, 00:13:06.334 "num_blocks": 65536, 00:13:06.334 "uuid": "fbd8b136-2080-4996-b49c-0bd0cc281562", 00:13:06.334 "assigned_rate_limits": { 00:13:06.335 "rw_ios_per_sec": 0, 00:13:06.335 "rw_mbytes_per_sec": 0, 00:13:06.335 "r_mbytes_per_sec": 0, 
00:13:06.335 "w_mbytes_per_sec": 0 00:13:06.335 }, 00:13:06.335 "claimed": true, 00:13:06.335 "claim_type": "exclusive_write", 00:13:06.335 "zoned": false, 00:13:06.335 "supported_io_types": { 00:13:06.335 "read": true, 00:13:06.335 "write": true, 00:13:06.335 "unmap": true, 00:13:06.335 "flush": true, 00:13:06.335 "reset": true, 00:13:06.335 "nvme_admin": false, 00:13:06.335 "nvme_io": false, 00:13:06.335 "nvme_io_md": false, 00:13:06.335 "write_zeroes": true, 00:13:06.335 "zcopy": true, 00:13:06.335 "get_zone_info": false, 00:13:06.335 "zone_management": false, 00:13:06.335 "zone_append": false, 00:13:06.335 "compare": false, 00:13:06.335 "compare_and_write": false, 00:13:06.335 "abort": true, 00:13:06.335 "seek_hole": false, 00:13:06.335 "seek_data": false, 00:13:06.335 "copy": true, 00:13:06.335 "nvme_iov_md": false 00:13:06.335 }, 00:13:06.335 "memory_domains": [ 00:13:06.335 { 00:13:06.335 "dma_device_id": "system", 00:13:06.335 "dma_device_type": 1 00:13:06.335 }, 00:13:06.335 { 00:13:06.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.335 "dma_device_type": 2 00:13:06.335 } 00:13:06.335 ], 00:13:06.335 "driver_specific": {} 00:13:06.335 }' 00:13:06.335 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.335 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:06.593 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:06.852 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:06.852 "name": "BaseBdev3", 00:13:06.852 "aliases": [ 00:13:06.852 "2baafdb5-c49f-4686-bf6e-6d546c2a3da9" 00:13:06.852 ], 00:13:06.852 "product_name": "Malloc disk", 00:13:06.852 "block_size": 512, 00:13:06.852 "num_blocks": 65536, 00:13:06.852 "uuid": "2baafdb5-c49f-4686-bf6e-6d546c2a3da9", 00:13:06.852 "assigned_rate_limits": { 00:13:06.852 "rw_ios_per_sec": 0, 00:13:06.852 "rw_mbytes_per_sec": 0, 00:13:06.852 "r_mbytes_per_sec": 0, 00:13:06.852 "w_mbytes_per_sec": 0 00:13:06.852 }, 00:13:06.852 "claimed": true, 00:13:06.852 "claim_type": "exclusive_write", 
00:13:06.852 "zoned": false, 00:13:06.852 "supported_io_types": { 00:13:06.852 "read": true, 00:13:06.852 "write": true, 00:13:06.852 "unmap": true, 00:13:06.852 "flush": true, 00:13:06.852 "reset": true, 00:13:06.852 "nvme_admin": false, 00:13:06.852 "nvme_io": false, 00:13:06.852 "nvme_io_md": false, 00:13:06.852 "write_zeroes": true, 00:13:06.853 "zcopy": true, 00:13:06.853 "get_zone_info": false, 00:13:06.853 "zone_management": false, 00:13:06.853 "zone_append": false, 00:13:06.853 "compare": false, 00:13:06.853 "compare_and_write": false, 00:13:06.853 "abort": true, 00:13:06.853 "seek_hole": false, 00:13:06.853 "seek_data": false, 00:13:06.853 "copy": true, 00:13:06.853 "nvme_iov_md": false 00:13:06.853 }, 00:13:06.853 "memory_domains": [ 00:13:06.853 { 00:13:06.853 "dma_device_id": "system", 00:13:06.853 "dma_device_type": 1 00:13:06.853 }, 00:13:06.853 { 00:13:06.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.853 "dma_device_type": 2 00:13:06.853 } 00:13:06.853 ], 00:13:06.853 "driver_specific": {} 00:13:06.853 }' 00:13:06.853 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.853 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:06.853 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:06.853 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:06.853 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:07.112 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:07.112 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:07.112 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:07.112 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:07.112 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:07.112 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:07.112 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:07.112 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:07.371 [2024-07-15 10:20:31.963668] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.371 [2024-07-15 10:20:31.963688] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.371 [2024-07-15 10:20:31.963724] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.371 [2024-07-15 10:20:31.963756] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.371 [2024-07-15 10:20:31.963763] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2327a80 name Existed_Raid, state offline 00:13:07.371 10:20:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1781009 00:13:07.371 10:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1781009 ']' 00:13:07.371 10:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1781009 00:13:07.371 
10:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:07.371 10:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.371 10:20:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1781009 00:13:07.371 10:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:07.371 10:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:07.371 10:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1781009' 00:13:07.371 killing process with pid 1781009 00:13:07.371 10:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1781009 00:13:07.371 [2024-07-15 10:20:32.019010] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.371 10:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1781009 00:13:07.371 [2024-07-15 10:20:32.041709] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.630 10:20:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:07.630 00:13:07.630 real 0m21.119s 00:13:07.630 user 0m38.500s 00:13:07.630 sys 0m4.074s 00:13:07.630 10:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.630 10:20:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.630 ************************************ 00:13:07.630 END TEST raid_state_function_test_sb 00:13:07.630 ************************************ 00:13:07.630 10:20:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:07.630 10:20:32 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:07.630 10:20:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:07.630 10:20:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.630 10:20:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.630 ************************************ 00:13:07.630 START TEST raid_superblock_test 00:13:07.630 ************************************ 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:07.630 
10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1785200 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1785200 /var/tmp/spdk-raid.sock 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1785200 ']' 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:07.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.630 10:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.630 [2024-07-15 10:20:32.347646] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
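The raid_superblock_test setup traced in the next entries boils down to: start bdev_svc on a private RPC socket, create three malloc bdevs, wrap each in a passthru bdev with a fixed UUID, and assemble them into a concat raid with a superblock. The sketch below restates that flow compactly; every command and argument is copied from this log, and only the loop structure is added for brevity.

# Sketch only -- not part of the captured log output.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
SOCK=/var/tmp/spdk-raid.sock
"$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
# the test script waits for the RPC socket (waitforlisten) before issuing RPCs
for i in 1 2 3; do
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_malloc_create 32 512 -b malloc$i
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
done
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s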
00:13:07.630 [2024-07-15 10:20:32.347693] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1785200 ] 00:13:07.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.630 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:07.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.630 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:07.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.630 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:07.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.630 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:07.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.630 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:07.631 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:07.631 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:07.631 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:07.890 [2024-07-15 10:20:32.439644] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.890 [2024-07-15 10:20:32.509059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.890 [2024-07-15 10:20:32.562238] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.890 [2024-07-15 10:20:32.562268] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:08.457 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:08.716 malloc1 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:08.716 [2024-07-15 10:20:33.462472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:08.716 [2024-07-15 10:20:33.462510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.716 [2024-07-15 10:20:33.462522] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x110f2f0 00:13:08.716 [2024-07-15 10:20:33.462530] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.716 [2024-07-15 10:20:33.463552] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.716 [2024-07-15 10:20:33.463575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:08.716 pt1 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:08.716 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:08.974 malloc2 00:13:08.974 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:09.232 [2024-07-15 10:20:33.798922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:09.232 [2024-07-15 10:20:33.798962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.232 [2024-07-15 10:20:33.798974] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11106d0 00:13:09.232 [2024-07-15 10:20:33.798981] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.232 [2024-07-15 10:20:33.799955] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.232 [2024-07-15 10:20:33.799975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:09.232 pt2 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:09.232 malloc3 00:13:09.232 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:09.490 [2024-07-15 10:20:34.143134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:09.490 [2024-07-15 10:20:34.143163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.490 [2024-07-15 10:20:34.143174] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a96b0 00:13:09.490 [2024-07-15 10:20:34.143182] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.491 [2024-07-15 10:20:34.144112] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.491 [2024-07-15 10:20:34.144131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:09.491 pt3 00:13:09.491 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:09.491 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:09.491 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:09.749 [2024-07-15 10:20:34.315603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:09.749 [2024-07-15 10:20:34.316421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:09.749 [2024-07-15 10:20:34.316458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:09.749 [2024-07-15 10:20:34.316557] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a9cb0 00:13:09.749 [2024-07-15 10:20:34.316565] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:09.749 [2024-07-15 10:20:34.316690] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a8270 00:13:09.749 [2024-07-15 10:20:34.316784] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a9cb0 00:13:09.749 [2024-07-15 10:20:34.316791] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12a9cb0 00:13:09.749 [2024-07-15 10:20:34.316851] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.749 "name": "raid_bdev1", 00:13:09.749 "uuid": "36202c33-a973-4a71-bdb2-486e8970099b", 00:13:09.749 "strip_size_kb": 64, 00:13:09.749 "state": "online", 00:13:09.749 "raid_level": "concat", 00:13:09.749 "superblock": true, 00:13:09.749 "num_base_bdevs": 3, 00:13:09.749 "num_base_bdevs_discovered": 3, 00:13:09.749 "num_base_bdevs_operational": 3, 00:13:09.749 "base_bdevs_list": [ 00:13:09.749 { 00:13:09.749 "name": "pt1", 00:13:09.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:09.749 "is_configured": true, 00:13:09.749 "data_offset": 2048, 00:13:09.749 "data_size": 63488 00:13:09.749 }, 00:13:09.749 { 00:13:09.749 "name": "pt2", 00:13:09.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.749 "is_configured": true, 00:13:09.749 "data_offset": 2048, 00:13:09.749 "data_size": 63488 00:13:09.749 }, 00:13:09.749 { 00:13:09.749 "name": "pt3", 00:13:09.749 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:09.749 "is_configured": true, 00:13:09.749 "data_offset": 2048, 00:13:09.749 "data_size": 63488 00:13:09.749 } 00:13:09.749 ] 00:13:09.749 }' 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.749 10:20:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.317 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:10.317 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:10.317 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:10.317 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:10.317 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:10.317 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:10.317 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:10.317 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:10.665 [2024-07-15 10:20:35.137874] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.665 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:10.665 "name": "raid_bdev1", 00:13:10.665 "aliases": [ 00:13:10.665 "36202c33-a973-4a71-bdb2-486e8970099b" 00:13:10.665 ], 00:13:10.665 "product_name": "Raid Volume", 00:13:10.665 "block_size": 512, 00:13:10.665 "num_blocks": 190464, 00:13:10.665 "uuid": "36202c33-a973-4a71-bdb2-486e8970099b", 00:13:10.665 "assigned_rate_limits": { 00:13:10.665 "rw_ios_per_sec": 0, 00:13:10.665 "rw_mbytes_per_sec": 0, 00:13:10.665 
"r_mbytes_per_sec": 0, 00:13:10.665 "w_mbytes_per_sec": 0 00:13:10.665 }, 00:13:10.665 "claimed": false, 00:13:10.665 "zoned": false, 00:13:10.665 "supported_io_types": { 00:13:10.665 "read": true, 00:13:10.665 "write": true, 00:13:10.665 "unmap": true, 00:13:10.665 "flush": true, 00:13:10.665 "reset": true, 00:13:10.665 "nvme_admin": false, 00:13:10.665 "nvme_io": false, 00:13:10.665 "nvme_io_md": false, 00:13:10.665 "write_zeroes": true, 00:13:10.665 "zcopy": false, 00:13:10.665 "get_zone_info": false, 00:13:10.665 "zone_management": false, 00:13:10.665 "zone_append": false, 00:13:10.665 "compare": false, 00:13:10.665 "compare_and_write": false, 00:13:10.665 "abort": false, 00:13:10.665 "seek_hole": false, 00:13:10.665 "seek_data": false, 00:13:10.665 "copy": false, 00:13:10.665 "nvme_iov_md": false 00:13:10.665 }, 00:13:10.665 "memory_domains": [ 00:13:10.665 { 00:13:10.665 "dma_device_id": "system", 00:13:10.665 "dma_device_type": 1 00:13:10.665 }, 00:13:10.665 { 00:13:10.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.665 "dma_device_type": 2 00:13:10.665 }, 00:13:10.665 { 00:13:10.665 "dma_device_id": "system", 00:13:10.665 "dma_device_type": 1 00:13:10.665 }, 00:13:10.665 { 00:13:10.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.665 "dma_device_type": 2 00:13:10.665 }, 00:13:10.665 { 00:13:10.665 "dma_device_id": "system", 00:13:10.665 "dma_device_type": 1 00:13:10.665 }, 00:13:10.665 { 00:13:10.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.665 "dma_device_type": 2 00:13:10.665 } 00:13:10.665 ], 00:13:10.665 "driver_specific": { 00:13:10.665 "raid": { 00:13:10.665 "uuid": "36202c33-a973-4a71-bdb2-486e8970099b", 00:13:10.665 "strip_size_kb": 64, 00:13:10.665 "state": "online", 00:13:10.665 "raid_level": "concat", 00:13:10.665 "superblock": true, 00:13:10.665 "num_base_bdevs": 3, 00:13:10.665 "num_base_bdevs_discovered": 3, 00:13:10.665 "num_base_bdevs_operational": 3, 00:13:10.665 "base_bdevs_list": [ 00:13:10.666 { 00:13:10.666 "name": "pt1", 00:13:10.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.666 "is_configured": true, 00:13:10.666 "data_offset": 2048, 00:13:10.666 "data_size": 63488 00:13:10.666 }, 00:13:10.666 { 00:13:10.666 "name": "pt2", 00:13:10.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.666 "is_configured": true, 00:13:10.666 "data_offset": 2048, 00:13:10.666 "data_size": 63488 00:13:10.666 }, 00:13:10.666 { 00:13:10.666 "name": "pt3", 00:13:10.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.666 "is_configured": true, 00:13:10.666 "data_offset": 2048, 00:13:10.666 "data_size": 63488 00:13:10.666 } 00:13:10.666 ] 00:13:10.666 } 00:13:10.666 } 00:13:10.666 }' 00:13:10.666 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:10.666 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:10.666 pt2 00:13:10.666 pt3' 00:13:10.666 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.666 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:10.666 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:10.666 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:10.666 "name": "pt1", 00:13:10.666 "aliases": [ 
00:13:10.666 "00000000-0000-0000-0000-000000000001" 00:13:10.666 ], 00:13:10.666 "product_name": "passthru", 00:13:10.666 "block_size": 512, 00:13:10.666 "num_blocks": 65536, 00:13:10.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.666 "assigned_rate_limits": { 00:13:10.666 "rw_ios_per_sec": 0, 00:13:10.666 "rw_mbytes_per_sec": 0, 00:13:10.666 "r_mbytes_per_sec": 0, 00:13:10.666 "w_mbytes_per_sec": 0 00:13:10.666 }, 00:13:10.666 "claimed": true, 00:13:10.666 "claim_type": "exclusive_write", 00:13:10.666 "zoned": false, 00:13:10.666 "supported_io_types": { 00:13:10.666 "read": true, 00:13:10.666 "write": true, 00:13:10.666 "unmap": true, 00:13:10.666 "flush": true, 00:13:10.666 "reset": true, 00:13:10.666 "nvme_admin": false, 00:13:10.666 "nvme_io": false, 00:13:10.666 "nvme_io_md": false, 00:13:10.666 "write_zeroes": true, 00:13:10.666 "zcopy": true, 00:13:10.666 "get_zone_info": false, 00:13:10.666 "zone_management": false, 00:13:10.666 "zone_append": false, 00:13:10.666 "compare": false, 00:13:10.666 "compare_and_write": false, 00:13:10.666 "abort": true, 00:13:10.666 "seek_hole": false, 00:13:10.666 "seek_data": false, 00:13:10.666 "copy": true, 00:13:10.666 "nvme_iov_md": false 00:13:10.666 }, 00:13:10.666 "memory_domains": [ 00:13:10.666 { 00:13:10.666 "dma_device_id": "system", 00:13:10.666 "dma_device_type": 1 00:13:10.666 }, 00:13:10.666 { 00:13:10.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.666 "dma_device_type": 2 00:13:10.666 } 00:13:10.666 ], 00:13:10.666 "driver_specific": { 00:13:10.666 "passthru": { 00:13:10.666 "name": "pt1", 00:13:10.666 "base_bdev_name": "malloc1" 00:13:10.666 } 00:13:10.666 } 00:13:10.666 }' 00:13:10.666 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.666 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:10.924 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:11.182 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:11.182 "name": "pt2", 00:13:11.182 "aliases": [ 00:13:11.182 "00000000-0000-0000-0000-000000000002" 00:13:11.182 ], 00:13:11.182 "product_name": "passthru", 00:13:11.182 "block_size": 
512, 00:13:11.182 "num_blocks": 65536, 00:13:11.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.182 "assigned_rate_limits": { 00:13:11.182 "rw_ios_per_sec": 0, 00:13:11.182 "rw_mbytes_per_sec": 0, 00:13:11.182 "r_mbytes_per_sec": 0, 00:13:11.182 "w_mbytes_per_sec": 0 00:13:11.182 }, 00:13:11.182 "claimed": true, 00:13:11.182 "claim_type": "exclusive_write", 00:13:11.182 "zoned": false, 00:13:11.182 "supported_io_types": { 00:13:11.182 "read": true, 00:13:11.182 "write": true, 00:13:11.182 "unmap": true, 00:13:11.182 "flush": true, 00:13:11.182 "reset": true, 00:13:11.182 "nvme_admin": false, 00:13:11.182 "nvme_io": false, 00:13:11.182 "nvme_io_md": false, 00:13:11.182 "write_zeroes": true, 00:13:11.182 "zcopy": true, 00:13:11.182 "get_zone_info": false, 00:13:11.182 "zone_management": false, 00:13:11.182 "zone_append": false, 00:13:11.183 "compare": false, 00:13:11.183 "compare_and_write": false, 00:13:11.183 "abort": true, 00:13:11.183 "seek_hole": false, 00:13:11.183 "seek_data": false, 00:13:11.183 "copy": true, 00:13:11.183 "nvme_iov_md": false 00:13:11.183 }, 00:13:11.183 "memory_domains": [ 00:13:11.183 { 00:13:11.183 "dma_device_id": "system", 00:13:11.183 "dma_device_type": 1 00:13:11.183 }, 00:13:11.183 { 00:13:11.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.183 "dma_device_type": 2 00:13:11.183 } 00:13:11.183 ], 00:13:11.183 "driver_specific": { 00:13:11.183 "passthru": { 00:13:11.183 "name": "pt2", 00:13:11.183 "base_bdev_name": "malloc2" 00:13:11.183 } 00:13:11.183 } 00:13:11.183 }' 00:13:11.183 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.183 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.183 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:11.183 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.440 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.440 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:11.440 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.440 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.440 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:11.440 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.440 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.440 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:11.441 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:11.441 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:11.441 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:11.698 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:11.698 "name": "pt3", 00:13:11.698 "aliases": [ 00:13:11.698 "00000000-0000-0000-0000-000000000003" 00:13:11.698 ], 00:13:11.698 "product_name": "passthru", 00:13:11.699 "block_size": 512, 00:13:11.699 "num_blocks": 65536, 00:13:11.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.699 
"assigned_rate_limits": { 00:13:11.699 "rw_ios_per_sec": 0, 00:13:11.699 "rw_mbytes_per_sec": 0, 00:13:11.699 "r_mbytes_per_sec": 0, 00:13:11.699 "w_mbytes_per_sec": 0 00:13:11.699 }, 00:13:11.699 "claimed": true, 00:13:11.699 "claim_type": "exclusive_write", 00:13:11.699 "zoned": false, 00:13:11.699 "supported_io_types": { 00:13:11.699 "read": true, 00:13:11.699 "write": true, 00:13:11.699 "unmap": true, 00:13:11.699 "flush": true, 00:13:11.699 "reset": true, 00:13:11.699 "nvme_admin": false, 00:13:11.699 "nvme_io": false, 00:13:11.699 "nvme_io_md": false, 00:13:11.699 "write_zeroes": true, 00:13:11.699 "zcopy": true, 00:13:11.699 "get_zone_info": false, 00:13:11.699 "zone_management": false, 00:13:11.699 "zone_append": false, 00:13:11.699 "compare": false, 00:13:11.699 "compare_and_write": false, 00:13:11.699 "abort": true, 00:13:11.699 "seek_hole": false, 00:13:11.699 "seek_data": false, 00:13:11.699 "copy": true, 00:13:11.699 "nvme_iov_md": false 00:13:11.699 }, 00:13:11.699 "memory_domains": [ 00:13:11.699 { 00:13:11.699 "dma_device_id": "system", 00:13:11.699 "dma_device_type": 1 00:13:11.699 }, 00:13:11.699 { 00:13:11.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.699 "dma_device_type": 2 00:13:11.699 } 00:13:11.699 ], 00:13:11.699 "driver_specific": { 00:13:11.699 "passthru": { 00:13:11.699 "name": "pt3", 00:13:11.699 "base_bdev_name": "malloc3" 00:13:11.699 } 00:13:11.699 } 00:13:11.699 }' 00:13:11.699 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.699 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.699 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:11.699 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.699 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:11.957 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:12.217 [2024-07-15 10:20:36.822336] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.217 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=36202c33-a973-4a71-bdb2-486e8970099b 00:13:12.217 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 36202c33-a973-4a71-bdb2-486e8970099b ']' 00:13:12.217 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:12.217 [2024-07-15 10:20:36.990604] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.217 [2024-07-15 10:20:36.990619] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.217 [2024-07-15 10:20:36.990653] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.217 [2024-07-15 10:20:36.990691] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.217 [2024-07-15 10:20:36.990699] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a9cb0 name raid_bdev1, state offline 00:13:12.476 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.476 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:12.476 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:12.476 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:12.476 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.476 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:12.735 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.735 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:12.735 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:12.735 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:12.993 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:12.993 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
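Annotation on the sequence above and below: after raid_bdev1 and the pt1-pt3 passthru bdevs are deleted, the NOT wrapper being resolved here deliberately re-runs bdev_raid_create directly on malloc1-malloc3. Those malloc bdevs still hold the superblock that was written through the passthru layer, so the call is expected to fail with JSON-RPC error -17 ("File exists"), which is exactly what the response below records. A minimal stand-alone sketch of this negative check, assuming the same RPC socket and stock rpc.py client as in the log (the rpc/sock variables are just shorthand):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  $rpc -s $sock bdev_raid_delete raid_bdev1
  for pt in pt1 pt2 pt3; do $rpc -s $sock bdev_passthru_delete $pt; done
  # expected to fail with -17: malloc1-malloc3 still carry raid_bdev1's superblock
  if $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
    echo "unexpected success"; exit 1
  fi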
00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.252 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:13.253 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:13.253 10:20:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:13.253 [2024-07-15 10:20:38.013208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:13.253 [2024-07-15 10:20:38.014179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:13.253 [2024-07-15 10:20:38.014212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:13.253 [2024-07-15 10:20:38.014243] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:13.253 [2024-07-15 10:20:38.014273] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:13.253 [2024-07-15 10:20:38.014288] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:13.253 [2024-07-15 10:20:38.014300] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.253 [2024-07-15 10:20:38.014307] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12b2d50 name raid_bdev1, state configuring 00:13:13.253 request: 00:13:13.253 { 00:13:13.253 "name": "raid_bdev1", 00:13:13.253 "raid_level": "concat", 00:13:13.253 "base_bdevs": [ 00:13:13.253 "malloc1", 00:13:13.253 "malloc2", 00:13:13.253 "malloc3" 00:13:13.253 ], 00:13:13.253 "strip_size_kb": 64, 00:13:13.253 "superblock": false, 00:13:13.253 "method": "bdev_raid_create", 00:13:13.253 "req_id": 1 00:13:13.253 } 00:13:13.253 Got JSON-RPC error response 00:13:13.253 response: 00:13:13.253 { 00:13:13.253 "code": -17, 00:13:13.253 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:13.253 } 00:13:13.253 10:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:13.253 10:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:13.253 10:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:13.253 10:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:13.253 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.253 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:13.512 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:13.512 10:20:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:13.512 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:13.771 [2024-07-15 10:20:38.338155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:13.771 [2024-07-15 10:20:38.338192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.771 [2024-07-15 10:20:38.338205] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a6d00 00:13:13.771 [2024-07-15 10:20:38.338213] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.771 [2024-07-15 10:20:38.339374] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.771 [2024-07-15 10:20:38.339397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:13.771 [2024-07-15 10:20:38.339447] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:13.771 [2024-07-15 10:20:38.339465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:13.771 pt1 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.771 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.771 "name": "raid_bdev1", 00:13:13.771 "uuid": "36202c33-a973-4a71-bdb2-486e8970099b", 00:13:13.771 "strip_size_kb": 64, 00:13:13.771 "state": "configuring", 00:13:13.771 "raid_level": "concat", 00:13:13.771 "superblock": true, 00:13:13.771 "num_base_bdevs": 3, 00:13:13.771 "num_base_bdevs_discovered": 1, 00:13:13.771 "num_base_bdevs_operational": 3, 00:13:13.771 "base_bdevs_list": [ 00:13:13.771 { 00:13:13.771 "name": "pt1", 00:13:13.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.771 "is_configured": true, 00:13:13.771 "data_offset": 2048, 00:13:13.771 "data_size": 63488 00:13:13.771 }, 00:13:13.771 { 00:13:13.771 "name": null, 00:13:13.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.771 
"is_configured": false, 00:13:13.771 "data_offset": 2048, 00:13:13.771 "data_size": 63488 00:13:13.771 }, 00:13:13.771 { 00:13:13.771 "name": null, 00:13:13.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.771 "is_configured": false, 00:13:13.771 "data_offset": 2048, 00:13:13.771 "data_size": 63488 00:13:13.771 } 00:13:13.771 ] 00:13:13.771 }' 00:13:13.772 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.772 10:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.338 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:13:14.338 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:14.597 [2024-07-15 10:20:39.152239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:14.597 [2024-07-15 10:20:39.152271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.597 [2024-07-15 10:20:39.152285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12a7370 00:13:14.597 [2024-07-15 10:20:39.152292] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.597 [2024-07-15 10:20:39.152526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.597 [2024-07-15 10:20:39.152537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:14.597 [2024-07-15 10:20:39.152577] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:14.597 [2024-07-15 10:20:39.152589] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:14.597 pt2 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:14.597 [2024-07-15 10:20:39.320692] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.597 10:20:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.857 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:14.857 "name": "raid_bdev1", 00:13:14.857 "uuid": "36202c33-a973-4a71-bdb2-486e8970099b", 00:13:14.857 "strip_size_kb": 64, 00:13:14.857 "state": "configuring", 00:13:14.857 "raid_level": "concat", 00:13:14.857 "superblock": true, 00:13:14.857 "num_base_bdevs": 3, 00:13:14.857 "num_base_bdevs_discovered": 1, 00:13:14.857 "num_base_bdevs_operational": 3, 00:13:14.857 "base_bdevs_list": [ 00:13:14.857 { 00:13:14.857 "name": "pt1", 00:13:14.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.857 "is_configured": true, 00:13:14.857 "data_offset": 2048, 00:13:14.857 "data_size": 63488 00:13:14.857 }, 00:13:14.857 { 00:13:14.857 "name": null, 00:13:14.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.857 "is_configured": false, 00:13:14.857 "data_offset": 2048, 00:13:14.857 "data_size": 63488 00:13:14.857 }, 00:13:14.857 { 00:13:14.857 "name": null, 00:13:14.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.857 "is_configured": false, 00:13:14.857 "data_offset": 2048, 00:13:14.857 "data_size": 63488 00:13:14.857 } 00:13:14.857 ] 00:13:14.857 }' 00:13:14.857 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:14.857 10:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.431 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:15.431 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:15.431 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:15.431 [2024-07-15 10:20:40.114719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:15.431 [2024-07-15 10:20:40.114761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.431 [2024-07-15 10:20:40.114776] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1107390 00:13:15.431 [2024-07-15 10:20:40.114785] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.431 [2024-07-15 10:20:40.115049] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.431 [2024-07-15 10:20:40.115066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:15.431 [2024-07-15 10:20:40.115113] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:15.431 [2024-07-15 10:20:40.115126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:15.431 pt2 00:13:15.431 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:15.431 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:15.431 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:15.690 [2024-07-15 10:20:40.275133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:15.690 [2024-07-15 10:20:40.275165] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:15.690 [2024-07-15 10:20:40.275178] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1106e20 00:13:15.690 [2024-07-15 10:20:40.275190] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.690 [2024-07-15 10:20:40.275429] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.690 [2024-07-15 10:20:40.275440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:15.690 [2024-07-15 10:20:40.275480] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:15.690 [2024-07-15 10:20:40.275492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:15.690 [2024-07-15 10:20:40.275567] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12a7de0 00:13:15.690 [2024-07-15 10:20:40.275574] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:15.690 [2024-07-15 10:20:40.275692] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12c19c0 00:13:15.690 [2024-07-15 10:20:40.275776] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12a7de0 00:13:15.690 [2024-07-15 10:20:40.275782] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12a7de0 00:13:15.690 [2024-07-15 10:20:40.275847] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.690 pt3 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.690 "name": "raid_bdev1", 00:13:15.690 "uuid": "36202c33-a973-4a71-bdb2-486e8970099b", 00:13:15.690 "strip_size_kb": 64, 00:13:15.690 "state": "online", 00:13:15.690 "raid_level": "concat", 00:13:15.690 "superblock": true, 00:13:15.690 "num_base_bdevs": 3, 00:13:15.690 
"num_base_bdevs_discovered": 3, 00:13:15.690 "num_base_bdevs_operational": 3, 00:13:15.690 "base_bdevs_list": [ 00:13:15.690 { 00:13:15.690 "name": "pt1", 00:13:15.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:15.690 "is_configured": true, 00:13:15.690 "data_offset": 2048, 00:13:15.690 "data_size": 63488 00:13:15.690 }, 00:13:15.690 { 00:13:15.690 "name": "pt2", 00:13:15.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.690 "is_configured": true, 00:13:15.690 "data_offset": 2048, 00:13:15.690 "data_size": 63488 00:13:15.690 }, 00:13:15.690 { 00:13:15.690 "name": "pt3", 00:13:15.690 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:15.690 "is_configured": true, 00:13:15.690 "data_offset": 2048, 00:13:15.690 "data_size": 63488 00:13:15.690 } 00:13:15.690 ] 00:13:15.690 }' 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.690 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.257 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:16.257 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:16.257 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:16.257 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:16.257 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:16.257 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:16.257 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:16.257 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:16.516 [2024-07-15 10:20:41.089420] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.516 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:16.516 "name": "raid_bdev1", 00:13:16.516 "aliases": [ 00:13:16.516 "36202c33-a973-4a71-bdb2-486e8970099b" 00:13:16.516 ], 00:13:16.516 "product_name": "Raid Volume", 00:13:16.516 "block_size": 512, 00:13:16.516 "num_blocks": 190464, 00:13:16.516 "uuid": "36202c33-a973-4a71-bdb2-486e8970099b", 00:13:16.516 "assigned_rate_limits": { 00:13:16.516 "rw_ios_per_sec": 0, 00:13:16.516 "rw_mbytes_per_sec": 0, 00:13:16.516 "r_mbytes_per_sec": 0, 00:13:16.516 "w_mbytes_per_sec": 0 00:13:16.516 }, 00:13:16.516 "claimed": false, 00:13:16.516 "zoned": false, 00:13:16.516 "supported_io_types": { 00:13:16.516 "read": true, 00:13:16.516 "write": true, 00:13:16.516 "unmap": true, 00:13:16.516 "flush": true, 00:13:16.517 "reset": true, 00:13:16.517 "nvme_admin": false, 00:13:16.517 "nvme_io": false, 00:13:16.517 "nvme_io_md": false, 00:13:16.517 "write_zeroes": true, 00:13:16.517 "zcopy": false, 00:13:16.517 "get_zone_info": false, 00:13:16.517 "zone_management": false, 00:13:16.517 "zone_append": false, 00:13:16.517 "compare": false, 00:13:16.517 "compare_and_write": false, 00:13:16.517 "abort": false, 00:13:16.517 "seek_hole": false, 00:13:16.517 "seek_data": false, 00:13:16.517 "copy": false, 00:13:16.517 "nvme_iov_md": false 00:13:16.517 }, 00:13:16.517 "memory_domains": [ 00:13:16.517 { 00:13:16.517 "dma_device_id": "system", 00:13:16.517 "dma_device_type": 1 00:13:16.517 }, 
00:13:16.517 { 00:13:16.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.517 "dma_device_type": 2 00:13:16.517 }, 00:13:16.517 { 00:13:16.517 "dma_device_id": "system", 00:13:16.517 "dma_device_type": 1 00:13:16.517 }, 00:13:16.517 { 00:13:16.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.517 "dma_device_type": 2 00:13:16.517 }, 00:13:16.517 { 00:13:16.517 "dma_device_id": "system", 00:13:16.517 "dma_device_type": 1 00:13:16.517 }, 00:13:16.517 { 00:13:16.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.517 "dma_device_type": 2 00:13:16.517 } 00:13:16.517 ], 00:13:16.517 "driver_specific": { 00:13:16.517 "raid": { 00:13:16.517 "uuid": "36202c33-a973-4a71-bdb2-486e8970099b", 00:13:16.517 "strip_size_kb": 64, 00:13:16.517 "state": "online", 00:13:16.517 "raid_level": "concat", 00:13:16.517 "superblock": true, 00:13:16.517 "num_base_bdevs": 3, 00:13:16.517 "num_base_bdevs_discovered": 3, 00:13:16.517 "num_base_bdevs_operational": 3, 00:13:16.517 "base_bdevs_list": [ 00:13:16.517 { 00:13:16.517 "name": "pt1", 00:13:16.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:16.517 "is_configured": true, 00:13:16.517 "data_offset": 2048, 00:13:16.517 "data_size": 63488 00:13:16.517 }, 00:13:16.517 { 00:13:16.517 "name": "pt2", 00:13:16.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:16.517 "is_configured": true, 00:13:16.517 "data_offset": 2048, 00:13:16.517 "data_size": 63488 00:13:16.517 }, 00:13:16.517 { 00:13:16.517 "name": "pt3", 00:13:16.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:16.517 "is_configured": true, 00:13:16.517 "data_offset": 2048, 00:13:16.517 "data_size": 63488 00:13:16.517 } 00:13:16.517 ] 00:13:16.517 } 00:13:16.517 } 00:13:16.517 }' 00:13:16.517 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:16.517 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:16.517 pt2 00:13:16.517 pt3' 00:13:16.517 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:16.517 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:16.517 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:16.517 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:16.517 "name": "pt1", 00:13:16.517 "aliases": [ 00:13:16.517 "00000000-0000-0000-0000-000000000001" 00:13:16.517 ], 00:13:16.517 "product_name": "passthru", 00:13:16.517 "block_size": 512, 00:13:16.517 "num_blocks": 65536, 00:13:16.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:16.517 "assigned_rate_limits": { 00:13:16.517 "rw_ios_per_sec": 0, 00:13:16.517 "rw_mbytes_per_sec": 0, 00:13:16.517 "r_mbytes_per_sec": 0, 00:13:16.517 "w_mbytes_per_sec": 0 00:13:16.517 }, 00:13:16.517 "claimed": true, 00:13:16.517 "claim_type": "exclusive_write", 00:13:16.517 "zoned": false, 00:13:16.517 "supported_io_types": { 00:13:16.517 "read": true, 00:13:16.517 "write": true, 00:13:16.517 "unmap": true, 00:13:16.517 "flush": true, 00:13:16.517 "reset": true, 00:13:16.517 "nvme_admin": false, 00:13:16.517 "nvme_io": false, 00:13:16.517 "nvme_io_md": false, 00:13:16.517 "write_zeroes": true, 00:13:16.517 "zcopy": true, 00:13:16.517 "get_zone_info": false, 00:13:16.517 "zone_management": false, 00:13:16.517 
"zone_append": false, 00:13:16.517 "compare": false, 00:13:16.517 "compare_and_write": false, 00:13:16.517 "abort": true, 00:13:16.517 "seek_hole": false, 00:13:16.517 "seek_data": false, 00:13:16.517 "copy": true, 00:13:16.517 "nvme_iov_md": false 00:13:16.517 }, 00:13:16.517 "memory_domains": [ 00:13:16.517 { 00:13:16.517 "dma_device_id": "system", 00:13:16.517 "dma_device_type": 1 00:13:16.517 }, 00:13:16.517 { 00:13:16.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.517 "dma_device_type": 2 00:13:16.517 } 00:13:16.517 ], 00:13:16.517 "driver_specific": { 00:13:16.517 "passthru": { 00:13:16.517 "name": "pt1", 00:13:16.517 "base_bdev_name": "malloc1" 00:13:16.517 } 00:13:16.517 } 00:13:16.517 }' 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:16.776 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:17.034 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:17.034 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:17.034 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:17.035 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:17.035 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:17.035 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:17.035 "name": "pt2", 00:13:17.035 "aliases": [ 00:13:17.035 "00000000-0000-0000-0000-000000000002" 00:13:17.035 ], 00:13:17.035 "product_name": "passthru", 00:13:17.035 "block_size": 512, 00:13:17.035 "num_blocks": 65536, 00:13:17.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.035 "assigned_rate_limits": { 00:13:17.035 "rw_ios_per_sec": 0, 00:13:17.035 "rw_mbytes_per_sec": 0, 00:13:17.035 "r_mbytes_per_sec": 0, 00:13:17.035 "w_mbytes_per_sec": 0 00:13:17.035 }, 00:13:17.035 "claimed": true, 00:13:17.035 "claim_type": "exclusive_write", 00:13:17.035 "zoned": false, 00:13:17.035 "supported_io_types": { 00:13:17.035 "read": true, 00:13:17.035 "write": true, 00:13:17.035 "unmap": true, 00:13:17.035 "flush": true, 00:13:17.035 "reset": true, 00:13:17.035 "nvme_admin": false, 00:13:17.035 "nvme_io": false, 00:13:17.035 "nvme_io_md": false, 00:13:17.035 "write_zeroes": true, 00:13:17.035 "zcopy": true, 00:13:17.035 "get_zone_info": false, 00:13:17.035 "zone_management": false, 00:13:17.035 "zone_append": false, 00:13:17.035 "compare": false, 00:13:17.035 "compare_and_write": false, 00:13:17.035 "abort": true, 00:13:17.035 
"seek_hole": false, 00:13:17.035 "seek_data": false, 00:13:17.035 "copy": true, 00:13:17.035 "nvme_iov_md": false 00:13:17.035 }, 00:13:17.035 "memory_domains": [ 00:13:17.035 { 00:13:17.035 "dma_device_id": "system", 00:13:17.035 "dma_device_type": 1 00:13:17.035 }, 00:13:17.035 { 00:13:17.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.035 "dma_device_type": 2 00:13:17.035 } 00:13:17.035 ], 00:13:17.035 "driver_specific": { 00:13:17.035 "passthru": { 00:13:17.035 "name": "pt2", 00:13:17.035 "base_bdev_name": "malloc2" 00:13:17.035 } 00:13:17.035 } 00:13:17.035 }' 00:13:17.035 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:17.293 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:17.293 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:17.293 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:17.293 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:17.293 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:17.293 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:17.293 10:20:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:17.293 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:17.293 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:17.293 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:17.551 "name": "pt3", 00:13:17.551 "aliases": [ 00:13:17.551 "00000000-0000-0000-0000-000000000003" 00:13:17.551 ], 00:13:17.551 "product_name": "passthru", 00:13:17.551 "block_size": 512, 00:13:17.551 "num_blocks": 65536, 00:13:17.551 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:17.551 "assigned_rate_limits": { 00:13:17.551 "rw_ios_per_sec": 0, 00:13:17.551 "rw_mbytes_per_sec": 0, 00:13:17.551 "r_mbytes_per_sec": 0, 00:13:17.551 "w_mbytes_per_sec": 0 00:13:17.551 }, 00:13:17.551 "claimed": true, 00:13:17.551 "claim_type": "exclusive_write", 00:13:17.551 "zoned": false, 00:13:17.551 "supported_io_types": { 00:13:17.551 "read": true, 00:13:17.551 "write": true, 00:13:17.551 "unmap": true, 00:13:17.551 "flush": true, 00:13:17.551 "reset": true, 00:13:17.551 "nvme_admin": false, 00:13:17.551 "nvme_io": false, 00:13:17.551 "nvme_io_md": false, 00:13:17.551 "write_zeroes": true, 00:13:17.551 "zcopy": true, 00:13:17.551 "get_zone_info": false, 00:13:17.551 "zone_management": false, 00:13:17.551 "zone_append": false, 00:13:17.551 "compare": false, 00:13:17.551 "compare_and_write": false, 00:13:17.551 "abort": true, 00:13:17.551 "seek_hole": false, 00:13:17.551 "seek_data": false, 00:13:17.551 "copy": true, 00:13:17.551 "nvme_iov_md": false 00:13:17.551 }, 
00:13:17.551 "memory_domains": [ 00:13:17.551 { 00:13:17.551 "dma_device_id": "system", 00:13:17.551 "dma_device_type": 1 00:13:17.551 }, 00:13:17.551 { 00:13:17.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.551 "dma_device_type": 2 00:13:17.551 } 00:13:17.551 ], 00:13:17.551 "driver_specific": { 00:13:17.551 "passthru": { 00:13:17.551 "name": "pt3", 00:13:17.551 "base_bdev_name": "malloc3" 00:13:17.551 } 00:13:17.551 } 00:13:17.551 }' 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:17.551 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:17.810 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:18.069 [2024-07-15 10:20:42.685536] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 36202c33-a973-4a71-bdb2-486e8970099b '!=' 36202c33-a973-4a71-bdb2-486e8970099b ']' 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1785200 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1785200 ']' 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1785200 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1785200 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1785200' 00:13:18.069 
killing process with pid 1785200 00:13:18.069 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1785200 00:13:18.069 [2024-07-15 10:20:42.737315] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.069 [2024-07-15 10:20:42.737355] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.069 [2024-07-15 10:20:42.737392] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.069 [2024-07-15 10:20:42.737400] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12a7de0 name raid_bdev1, state offline 00:13:18.070 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1785200 00:13:18.070 [2024-07-15 10:20:42.759865] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.329 10:20:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:18.329 00:13:18.329 real 0m10.639s 00:13:18.329 user 0m18.937s 00:13:18.329 sys 0m2.065s 00:13:18.329 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.329 10:20:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.329 ************************************ 00:13:18.329 END TEST raid_superblock_test 00:13:18.329 ************************************ 00:13:18.329 10:20:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:18.329 10:20:42 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:18.329 10:20:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:18.329 10:20:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.329 10:20:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.329 ************************************ 00:13:18.329 START TEST raid_read_error_test 00:13:18.329 ************************************ 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:18.329 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 
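Annotation: the superblock test above ends by re-reading raid_bdev1's UUID and confirming it still matches the value captured right after the raid was first created (bdev_raid.sh@434); has_redundancy returns 1 for concat, so the script proceeds straight to killing the test app (killprocess at bdev_raid.sh@562). The UUID assertion reduces to the following sketch, using the same rpc.py socket and jq filter shown in the log (variable names are illustrative):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  raid_bdev_uuid=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')   # taken right after creation
  # ... raid bdev torn down and re-assembled from the superblocks on pt1-pt3 ...
  uuid_now=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  [ "$uuid_now" = "$raid_bdev_uuid" ]   # the test fails if the uuid changed across re-assembly

The raid_read_error_test that starts here builds the BaseBdev1-BaseBdev3 name list and drives the array through bdevperf, as the launch recorded below shows.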
00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:18.329 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.YFb7EyBFaV 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1787360 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1787360 /var/tmp/spdk-raid.sock 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1787360 ']' 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:18.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.330 10:20:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.330 [2024-07-15 10:20:43.049713] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
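Annotation: the bdevperf invocation recorded above supplies the I/O for the read-error test; it runs the SPDK bdevperf example against raid_bdev1 over the /var/tmp/spdk-raid.sock RPC socket with a 60-second randrw workload (-M 50 mix, 128k I/O size, queue depth 1) and bdev_raid debug logging enabled, writing its output to the mktemp log under /raidtest. A simplified sketch of the launch, assuming the built SPDK tree at the path shown in the log (the socket wait loop is only a stand-in for the harness's waitforlisten helper):

  bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
  bdevperf_log=$(mktemp -p /raidtest)
  $bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
  raid_pid=$!
  # wait for the RPC socket before issuing any rpc.py calls against it
  while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.2; done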
00:13:18.330 [2024-07-15 10:20:43.049755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1787360 ] 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:18.330 [the same two messages repeat for each remaining QAT VF: 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:01.6] 00:13:18.330
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:18.330 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:18.330 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:18.589 [2024-07-15 10:20:43.141593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.589 [2024-07-15 10:20:43.214845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.589 [2024-07-15 10:20:43.263803] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.589 [2024-07-15 10:20:43.263828] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.157 10:20:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.157 10:20:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:19.157 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:19.158 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:19.417 BaseBdev1_malloc 00:13:19.417 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:19.417 true 00:13:19.417 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:19.676 [2024-07-15 10:20:44.355985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:19.676 [2024-07-15 10:20:44.356017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.676 [2024-07-15 10:20:44.356035] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dd7190 00:13:19.676 [2024-07-15 10:20:44.356046] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.676 [2024-07-15 10:20:44.357211] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.676 [2024-07-15 10:20:44.357234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:19.676 BaseBdev1 00:13:19.676 10:20:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:19.676 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:19.934 BaseBdev2_malloc 00:13:19.934 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:19.934 true 00:13:19.934 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:20.194 [2024-07-15 10:20:44.868859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:20.194 [2024-07-15 10:20:44.868892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.194 [2024-07-15 10:20:44.868917] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ddbe20 00:13:20.194 [2024-07-15 10:20:44.868929] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.194 [2024-07-15 10:20:44.869952] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.194 [2024-07-15 10:20:44.869976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:20.194 BaseBdev2 00:13:20.194 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:20.194 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:20.453 BaseBdev3_malloc 00:13:20.453 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:20.453 true 00:13:20.453 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:20.712 [2024-07-15 10:20:45.357726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:20.712 [2024-07-15 10:20:45.357757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.712 [2024-07-15 10:20:45.357779] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ddcd90 00:13:20.712 [2024-07-15 10:20:45.357791] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.712 [2024-07-15 10:20:45.358842] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.712 [2024-07-15 10:20:45.358866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:20.712 BaseBdev3 00:13:20.712 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:20.972 [2024-07-15 10:20:45.526190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.972 [2024-07-15 10:20:45.527086] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.972 [2024-07-15 10:20:45.527134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.972 [2024-07-15 10:20:45.527274] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ddeba0 00:13:20.972 [2024-07-15 10:20:45.527281] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:20.972 [2024-07-15 10:20:45.527409] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c32af0 00:13:20.972 [2024-07-15 10:20:45.527512] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ddeba0 00:13:20.972 [2024-07-15 10:20:45.527518] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ddeba0 00:13:20.972 [2024-07-15 10:20:45.527585] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.972 "name": "raid_bdev1", 00:13:20.972 "uuid": "c135993e-b897-49ab-9a4f-5375a257fb8d", 00:13:20.972 "strip_size_kb": 64, 00:13:20.972 "state": "online", 00:13:20.972 "raid_level": "concat", 00:13:20.972 "superblock": true, 00:13:20.972 "num_base_bdevs": 3, 00:13:20.972 "num_base_bdevs_discovered": 3, 00:13:20.972 "num_base_bdevs_operational": 3, 00:13:20.972 "base_bdevs_list": [ 00:13:20.972 { 00:13:20.972 "name": "BaseBdev1", 00:13:20.972 "uuid": "c43addc6-8de0-57d4-b426-ae22fb305230", 00:13:20.972 "is_configured": true, 00:13:20.972 "data_offset": 2048, 00:13:20.972 "data_size": 63488 00:13:20.972 }, 00:13:20.972 { 00:13:20.972 "name": "BaseBdev2", 00:13:20.972 "uuid": "ef4e4a35-0caf-56b3-a66a-0bd16a4c6c9c", 00:13:20.972 "is_configured": true, 00:13:20.972 "data_offset": 2048, 00:13:20.972 "data_size": 63488 00:13:20.972 }, 00:13:20.972 { 00:13:20.972 "name": "BaseBdev3", 00:13:20.972 "uuid": "e54c04af-537e-5fcb-a9f5-683810676e91", 00:13:20.972 "is_configured": true, 00:13:20.972 "data_offset": 2048, 00:13:20.972 "data_size": 63488 
00:13:20.972 } 00:13:20.972 ] 00:13:20.972 }' 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.972 10:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.540 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:21.540 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:21.540 [2024-07-15 10:20:46.188086] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19316c0 00:13:22.478 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.737 "name": "raid_bdev1", 00:13:22.737 "uuid": "c135993e-b897-49ab-9a4f-5375a257fb8d", 00:13:22.737 "strip_size_kb": 64, 00:13:22.737 "state": "online", 00:13:22.737 "raid_level": "concat", 00:13:22.737 "superblock": true, 00:13:22.737 "num_base_bdevs": 3, 00:13:22.737 "num_base_bdevs_discovered": 3, 00:13:22.737 "num_base_bdevs_operational": 3, 00:13:22.737 "base_bdevs_list": [ 00:13:22.737 { 00:13:22.737 "name": "BaseBdev1", 00:13:22.737 "uuid": "c43addc6-8de0-57d4-b426-ae22fb305230", 00:13:22.737 "is_configured": true, 00:13:22.737 "data_offset": 2048, 00:13:22.737 "data_size": 63488 00:13:22.737 }, 00:13:22.737 { 00:13:22.737 "name": "BaseBdev2", 00:13:22.737 "uuid": "ef4e4a35-0caf-56b3-a66a-0bd16a4c6c9c", 00:13:22.737 "is_configured": true, 00:13:22.737 "data_offset": 2048, 
00:13:22.737 "data_size": 63488 00:13:22.737 }, 00:13:22.737 { 00:13:22.737 "name": "BaseBdev3", 00:13:22.737 "uuid": "e54c04af-537e-5fcb-a9f5-683810676e91", 00:13:22.737 "is_configured": true, 00:13:22.737 "data_offset": 2048, 00:13:22.737 "data_size": 63488 00:13:22.737 } 00:13:22.737 ] 00:13:22.737 }' 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.737 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.304 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:23.304 [2024-07-15 10:20:48.068374] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:23.304 [2024-07-15 10:20:48.068403] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.304 [2024-07-15 10:20:48.070350] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.304 [2024-07-15 10:20:48.070375] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.304 [2024-07-15 10:20:48.070395] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.304 [2024-07-15 10:20:48.070402] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ddeba0 name raid_bdev1, state offline 00:13:23.304 0 00:13:23.304 10:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1787360 00:13:23.304 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1787360 ']' 00:13:23.304 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1787360 00:13:23.304 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:23.563 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.563 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1787360 00:13:23.563 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.563 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.563 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1787360' 00:13:23.563 killing process with pid 1787360 00:13:23.563 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1787360 00:13:23.563 [2024-07-15 10:20:48.142884] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.563 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1787360 00:13:23.563 [2024-07-15 10:20:48.160476] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.563 10:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.YFb7EyBFaV 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:13:23.564 00:13:23.564 real 0m5.355s 00:13:23.564 user 0m8.165s 00:13:23.564 sys 0m0.874s 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:23.564 10:20:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.564 ************************************ 00:13:23.564 END TEST raid_read_error_test 00:13:23.564 ************************************ 00:13:23.825 10:20:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:23.825 10:20:48 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:23.825 10:20:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:23.825 10:20:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.825 10:20:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.825 ************************************ 00:13:23.825 START TEST raid_write_error_test 00:13:23.825 ************************************ 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.M6JmFiMT8b 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1788378 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1788378 /var/tmp/spdk-raid.sock 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1788378 ']' 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:23.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:23.825 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.825 [2024-07-15 10:20:48.506535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
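The write-error variant being set up here differs from the read test above only in the injected I/O type. Once its raid is online and bdevperf is running, the log below injects a write failure into the first base bdev and then checks both the raid state and the failure rate that bdevperf recorded. In outline (the pipeline that extracts the rate is reconstructed from the separate grep/awk xtrace entries, so treat the exact ordering as approximate):

  RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  BDEVPERF_LOG=/raidtest/tmp.M6JmFiMT8b
  # make every write to the first base bdev fail
  $RPC -s $SOCK bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # concat has no redundancy, so raid_bdev1 stays online but the failures must be visible
  fail_per_s=$(grep -v Job "$BDEVPERF_LOG" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != 0.00 ]]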
00:13:23.825 [2024-07-15 10:20:48.506581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788378 ] 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:23.825 [the same two messages repeat for each remaining QAT VF: 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:01.6] 00:13:23.825
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:23.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:23.825 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:23.825 [2024-07-15 10:20:48.598627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.143 [2024-07-15 10:20:48.672561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.143 [2024-07-15 10:20:48.723959] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.143 [2024-07-15 10:20:48.723986] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.710 10:20:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.710 10:20:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:24.710 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:24.710 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:24.710 BaseBdev1_malloc 00:13:24.710 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:24.969 true 00:13:24.969 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:25.228 [2024-07-15 10:20:49.763987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:25.228 [2024-07-15 10:20:49.764026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.228 [2024-07-15 10:20:49.764045] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eb9190 00:13:25.228 [2024-07-15 10:20:49.764058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.228 [2024-07-15 10:20:49.765255] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.228 [2024-07-15 10:20:49.765282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:25.228 BaseBdev1 00:13:25.228 10:20:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:25.228 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:25.228 BaseBdev2_malloc 00:13:25.228 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:25.487 true 00:13:25.488 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:25.747 [2024-07-15 10:20:50.292950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:25.747 [2024-07-15 10:20:50.292986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.747 [2024-07-15 10:20:50.293004] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ebde20 00:13:25.747 [2024-07-15 10:20:50.293014] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.747 [2024-07-15 10:20:50.294006] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.747 [2024-07-15 10:20:50.294045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:25.747 BaseBdev2 00:13:25.747 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:25.747 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:25.747 BaseBdev3_malloc 00:13:25.747 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:26.005 true 00:13:26.005 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:26.264 [2024-07-15 10:20:50.809665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:26.264 [2024-07-15 10:20:50.809699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.264 [2024-07-15 10:20:50.809719] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ebed90 00:13:26.264 [2024-07-15 10:20:50.809732] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.264 [2024-07-15 10:20:50.810742] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.264 [2024-07-15 10:20:50.810767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:26.264 BaseBdev3 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:13:26.264 [2024-07-15 10:20:50.978125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.264 [2024-07-15 10:20:50.978940] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.264 [2024-07-15 10:20:50.978987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.264 [2024-07-15 10:20:50.979122] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ec0ba0 00:13:26.264 [2024-07-15 10:20:50.979129] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:26.264 [2024-07-15 10:20:50.979248] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d14af0 00:13:26.264 [2024-07-15 10:20:50.979344] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ec0ba0 00:13:26.264 [2024-07-15 10:20:50.979351] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1ec0ba0 00:13:26.264 [2024-07-15 10:20:50.979414] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:26.264 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.264 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.523 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.523 "name": "raid_bdev1", 00:13:26.523 "uuid": "29065203-2983-4ce8-bd04-878b5fc747ee", 00:13:26.523 "strip_size_kb": 64, 00:13:26.523 "state": "online", 00:13:26.523 "raid_level": "concat", 00:13:26.523 "superblock": true, 00:13:26.523 "num_base_bdevs": 3, 00:13:26.523 "num_base_bdevs_discovered": 3, 00:13:26.523 "num_base_bdevs_operational": 3, 00:13:26.523 "base_bdevs_list": [ 00:13:26.523 { 00:13:26.523 "name": "BaseBdev1", 00:13:26.523 "uuid": "cbfd8ffa-df1f-5ec5-9b7a-221f2e40029f", 00:13:26.523 "is_configured": true, 00:13:26.523 "data_offset": 2048, 00:13:26.523 "data_size": 63488 00:13:26.523 }, 00:13:26.523 { 00:13:26.523 "name": "BaseBdev2", 00:13:26.523 "uuid": "4a2108aa-f381-568d-820f-73af07ec5ff3", 00:13:26.523 "is_configured": true, 00:13:26.523 "data_offset": 2048, 00:13:26.523 "data_size": 63488 00:13:26.523 }, 00:13:26.523 { 00:13:26.523 "name": "BaseBdev3", 00:13:26.523 "uuid": "ce7eb952-14ce-5afc-80c3-362ffbf609c0", 00:13:26.523 "is_configured": true, 00:13:26.523 "data_offset": 2048, 00:13:26.523 
"data_size": 63488 00:13:26.523 } 00:13:26.523 ] 00:13:26.523 }' 00:13:26.523 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.523 10:20:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.091 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:27.091 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:27.091 [2024-07-15 10:20:51.748420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a136c0 00:13:28.028 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.286 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.287 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.287 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.287 "name": "raid_bdev1", 00:13:28.287 "uuid": "29065203-2983-4ce8-bd04-878b5fc747ee", 00:13:28.287 "strip_size_kb": 64, 00:13:28.287 "state": "online", 00:13:28.287 "raid_level": "concat", 00:13:28.287 "superblock": true, 00:13:28.287 "num_base_bdevs": 3, 00:13:28.287 "num_base_bdevs_discovered": 3, 00:13:28.287 "num_base_bdevs_operational": 3, 00:13:28.287 "base_bdevs_list": [ 00:13:28.287 { 00:13:28.287 "name": "BaseBdev1", 00:13:28.287 "uuid": "cbfd8ffa-df1f-5ec5-9b7a-221f2e40029f", 00:13:28.287 "is_configured": true, 00:13:28.287 "data_offset": 2048, 00:13:28.287 "data_size": 63488 00:13:28.287 }, 00:13:28.287 { 00:13:28.287 "name": "BaseBdev2", 00:13:28.287 "uuid": "4a2108aa-f381-568d-820f-73af07ec5ff3", 00:13:28.287 "is_configured": 
true, 00:13:28.287 "data_offset": 2048, 00:13:28.287 "data_size": 63488 00:13:28.287 }, 00:13:28.287 { 00:13:28.287 "name": "BaseBdev3", 00:13:28.287 "uuid": "ce7eb952-14ce-5afc-80c3-362ffbf609c0", 00:13:28.287 "is_configured": true, 00:13:28.287 "data_offset": 2048, 00:13:28.287 "data_size": 63488 00:13:28.287 } 00:13:28.287 ] 00:13:28.287 }' 00:13:28.287 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.287 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.853 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:29.112 [2024-07-15 10:20:53.676054] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.112 [2024-07-15 10:20:53.676087] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.112 [2024-07-15 10:20:53.678033] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.112 [2024-07-15 10:20:53.678061] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.112 [2024-07-15 10:20:53.678082] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.112 [2024-07-15 10:20:53.678089] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec0ba0 name raid_bdev1, state offline 00:13:29.112 0 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1788378 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1788378 ']' 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1788378 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1788378 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1788378' 00:13:29.112 killing process with pid 1788378 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1788378 00:13:29.112 [2024-07-15 10:20:53.747490] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.112 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1788378 00:13:29.112 [2024-07-15 10:20:53.764500] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.M6JmFiMT8b 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy 
concat 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:13:29.370 00:13:29.370 real 0m5.517s 00:13:29.370 user 0m8.419s 00:13:29.370 sys 0m0.977s 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.370 10:20:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.370 ************************************ 00:13:29.370 END TEST raid_write_error_test 00:13:29.370 ************************************ 00:13:29.370 10:20:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:29.370 10:20:53 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:29.370 10:20:53 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:13:29.370 10:20:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:29.370 10:20:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.370 10:20:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.370 ************************************ 00:13:29.370 START TEST raid_state_function_test 00:13:29.370 ************************************ 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:29.370 10:20:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1789416 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1789416' 00:13:29.370 Process raid pid: 1789416 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1789416 /var/tmp/spdk-raid.sock 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1789416 ']' 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:29.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.370 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.370 [2024-07-15 10:20:54.107284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
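raid_state_function_test, which starts here, runs no I/O; it exercises the raid bdev state machine through the plain RPC service (bdev_svc) started above. Its first step, visible right after the EAL initialization below, is to create a raid1 bdev whose base bdevs do not exist yet and confirm that it stays in the configuring state. A condensed sketch of that check (the $RPC/$SOCK shorthands are added here; the commands and the jq filter are the ones in the log):

  RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  # raid1 over three not-yet-registered base bdevs
  $RPC -s $SOCK bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # with num_base_bdevs_discovered still 0, "state" is reported as "configuring"
  $RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'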
00:13:29.371 [2024-07-15 10:20:54.107331] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.371 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:29.371 [the same two messages repeat for each remaining QAT VF: 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:01.6] 00:13:29.630
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:29.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:29.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:29.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:29.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:29.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:29.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:29.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:29.630 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.630 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:29.630 [2024-07-15 10:20:54.200471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.630 [2024-07-15 10:20:54.273474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.630 [2024-07-15 10:20:54.323207] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.630 [2024-07-15 10:20:54.323231] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.198 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.198 10:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:30.198 10:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:30.485 [2024-07-15 10:20:55.037993] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.485 [2024-07-15 10:20:55.038025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.485 [2024-07-15 10:20:55.038034] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:30.485 [2024-07-15 10:20:55.038044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:30.485 [2024-07-15 10:20:55.038051] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:30.485 [2024-07-15 10:20:55.038059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:30.485 "name": "Existed_Raid", 00:13:30.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.485 "strip_size_kb": 0, 00:13:30.485 "state": "configuring", 00:13:30.485 "raid_level": "raid1", 00:13:30.485 "superblock": false, 00:13:30.485 "num_base_bdevs": 3, 00:13:30.485 "num_base_bdevs_discovered": 0, 00:13:30.485 "num_base_bdevs_operational": 3, 00:13:30.485 "base_bdevs_list": [ 00:13:30.485 { 00:13:30.485 "name": "BaseBdev1", 00:13:30.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.485 "is_configured": false, 00:13:30.485 "data_offset": 0, 00:13:30.485 "data_size": 0 00:13:30.485 }, 00:13:30.485 { 00:13:30.485 "name": "BaseBdev2", 00:13:30.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.485 "is_configured": false, 00:13:30.485 "data_offset": 0, 00:13:30.485 "data_size": 0 00:13:30.485 }, 00:13:30.485 { 00:13:30.485 "name": "BaseBdev3", 00:13:30.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.485 "is_configured": false, 00:13:30.485 "data_offset": 0, 00:13:30.485 "data_size": 0 00:13:30.485 } 00:13:30.485 ] 00:13:30.485 }' 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:30.485 10:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.051 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:31.309 [2024-07-15 10:20:55.868066] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.309 [2024-07-15 10:20:55.868088] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf65f40 name Existed_Raid, state configuring 00:13:31.309 10:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:31.309 [2024-07-15 10:20:56.044520] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:31.309 [2024-07-15 10:20:56.044540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:31.309 [2024-07-15 10:20:56.044549] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:31.309 [2024-07-15 10:20:56.044559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:13:31.309 [2024-07-15 10:20:56.044565] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:31.309 [2024-07-15 10:20:56.044573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:31.309 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.568 [2024-07-15 10:20:56.229413] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.568 BaseBdev1 00:13:31.568 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:31.568 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:31.568 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.568 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:31.568 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.568 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.568 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:31.827 [ 00:13:31.827 { 00:13:31.827 "name": "BaseBdev1", 00:13:31.827 "aliases": [ 00:13:31.827 "c89be0b6-0b57-45ba-aba9-20afc7195a35" 00:13:31.827 ], 00:13:31.827 "product_name": "Malloc disk", 00:13:31.827 "block_size": 512, 00:13:31.827 "num_blocks": 65536, 00:13:31.827 "uuid": "c89be0b6-0b57-45ba-aba9-20afc7195a35", 00:13:31.827 "assigned_rate_limits": { 00:13:31.827 "rw_ios_per_sec": 0, 00:13:31.827 "rw_mbytes_per_sec": 0, 00:13:31.827 "r_mbytes_per_sec": 0, 00:13:31.827 "w_mbytes_per_sec": 0 00:13:31.827 }, 00:13:31.827 "claimed": true, 00:13:31.827 "claim_type": "exclusive_write", 00:13:31.827 "zoned": false, 00:13:31.827 "supported_io_types": { 00:13:31.827 "read": true, 00:13:31.827 "write": true, 00:13:31.827 "unmap": true, 00:13:31.827 "flush": true, 00:13:31.827 "reset": true, 00:13:31.827 "nvme_admin": false, 00:13:31.827 "nvme_io": false, 00:13:31.827 "nvme_io_md": false, 00:13:31.827 "write_zeroes": true, 00:13:31.827 "zcopy": true, 00:13:31.827 "get_zone_info": false, 00:13:31.827 "zone_management": false, 00:13:31.827 "zone_append": false, 00:13:31.827 "compare": false, 00:13:31.827 "compare_and_write": false, 00:13:31.827 "abort": true, 00:13:31.827 "seek_hole": false, 00:13:31.827 "seek_data": false, 00:13:31.827 "copy": true, 00:13:31.827 "nvme_iov_md": false 00:13:31.827 }, 00:13:31.827 "memory_domains": [ 00:13:31.827 { 00:13:31.827 "dma_device_id": "system", 00:13:31.827 "dma_device_type": 1 00:13:31.827 }, 00:13:31.827 { 00:13:31.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.827 "dma_device_type": 2 00:13:31.827 } 00:13:31.827 ], 00:13:31.827 "driver_specific": {} 00:13:31.827 } 00:13:31.827 ] 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.827 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.085 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.085 "name": "Existed_Raid", 00:13:32.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.085 "strip_size_kb": 0, 00:13:32.085 "state": "configuring", 00:13:32.085 "raid_level": "raid1", 00:13:32.085 "superblock": false, 00:13:32.085 "num_base_bdevs": 3, 00:13:32.085 "num_base_bdevs_discovered": 1, 00:13:32.085 "num_base_bdevs_operational": 3, 00:13:32.085 "base_bdevs_list": [ 00:13:32.085 { 00:13:32.085 "name": "BaseBdev1", 00:13:32.085 "uuid": "c89be0b6-0b57-45ba-aba9-20afc7195a35", 00:13:32.085 "is_configured": true, 00:13:32.085 "data_offset": 0, 00:13:32.085 "data_size": 65536 00:13:32.085 }, 00:13:32.085 { 00:13:32.085 "name": "BaseBdev2", 00:13:32.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.085 "is_configured": false, 00:13:32.085 "data_offset": 0, 00:13:32.085 "data_size": 0 00:13:32.085 }, 00:13:32.085 { 00:13:32.085 "name": "BaseBdev3", 00:13:32.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.085 "is_configured": false, 00:13:32.085 "data_offset": 0, 00:13:32.085 "data_size": 0 00:13:32.085 } 00:13:32.085 ] 00:13:32.085 }' 00:13:32.085 10:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.085 10:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.651 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:32.651 [2024-07-15 10:20:57.404427] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:32.651 [2024-07-15 10:20:57.404451] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf65810 name Existed_Raid, state configuring 00:13:32.651 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
Existed_Raid 00:13:32.910 [2024-07-15 10:20:57.568867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.910 [2024-07-15 10:20:57.569886] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:32.910 [2024-07-15 10:20:57.569919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:32.910 [2024-07-15 10:20:57.569929] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:32.910 [2024-07-15 10:20:57.569938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.910 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.168 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.168 "name": "Existed_Raid", 00:13:33.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.168 "strip_size_kb": 0, 00:13:33.168 "state": "configuring", 00:13:33.168 "raid_level": "raid1", 00:13:33.168 "superblock": false, 00:13:33.168 "num_base_bdevs": 3, 00:13:33.168 "num_base_bdevs_discovered": 1, 00:13:33.168 "num_base_bdevs_operational": 3, 00:13:33.168 "base_bdevs_list": [ 00:13:33.168 { 00:13:33.168 "name": "BaseBdev1", 00:13:33.168 "uuid": "c89be0b6-0b57-45ba-aba9-20afc7195a35", 00:13:33.168 "is_configured": true, 00:13:33.168 "data_offset": 0, 00:13:33.168 "data_size": 65536 00:13:33.168 }, 00:13:33.168 { 00:13:33.168 "name": "BaseBdev2", 00:13:33.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.168 "is_configured": false, 00:13:33.168 "data_offset": 0, 00:13:33.168 "data_size": 0 00:13:33.168 }, 00:13:33.168 { 00:13:33.168 "name": "BaseBdev3", 00:13:33.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.168 "is_configured": false, 00:13:33.168 "data_offset": 0, 00:13:33.168 "data_size": 0 00:13:33.168 } 00:13:33.168 ] 
00:13:33.168 }' 00:13:33.168 10:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.168 10:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.735 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:33.735 [2024-07-15 10:20:58.409729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.735 BaseBdev2 00:13:33.735 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:33.735 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:33.735 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:33.735 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:33.735 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:33.735 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:33.735 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:33.994 [ 00:13:33.994 { 00:13:33.994 "name": "BaseBdev2", 00:13:33.994 "aliases": [ 00:13:33.994 "835580cc-0327-4112-9ccd-9164430bfd16" 00:13:33.994 ], 00:13:33.994 "product_name": "Malloc disk", 00:13:33.994 "block_size": 512, 00:13:33.994 "num_blocks": 65536, 00:13:33.994 "uuid": "835580cc-0327-4112-9ccd-9164430bfd16", 00:13:33.994 "assigned_rate_limits": { 00:13:33.994 "rw_ios_per_sec": 0, 00:13:33.994 "rw_mbytes_per_sec": 0, 00:13:33.994 "r_mbytes_per_sec": 0, 00:13:33.994 "w_mbytes_per_sec": 0 00:13:33.994 }, 00:13:33.994 "claimed": true, 00:13:33.994 "claim_type": "exclusive_write", 00:13:33.994 "zoned": false, 00:13:33.994 "supported_io_types": { 00:13:33.994 "read": true, 00:13:33.994 "write": true, 00:13:33.994 "unmap": true, 00:13:33.994 "flush": true, 00:13:33.994 "reset": true, 00:13:33.994 "nvme_admin": false, 00:13:33.994 "nvme_io": false, 00:13:33.994 "nvme_io_md": false, 00:13:33.994 "write_zeroes": true, 00:13:33.994 "zcopy": true, 00:13:33.994 "get_zone_info": false, 00:13:33.994 "zone_management": false, 00:13:33.994 "zone_append": false, 00:13:33.994 "compare": false, 00:13:33.994 "compare_and_write": false, 00:13:33.994 "abort": true, 00:13:33.994 "seek_hole": false, 00:13:33.994 "seek_data": false, 00:13:33.994 "copy": true, 00:13:33.994 "nvme_iov_md": false 00:13:33.994 }, 00:13:33.994 "memory_domains": [ 00:13:33.994 { 00:13:33.994 "dma_device_id": "system", 00:13:33.994 "dma_device_type": 1 00:13:33.994 }, 00:13:33.994 { 00:13:33.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.994 "dma_device_type": 2 00:13:33.994 } 00:13:33.994 ], 00:13:33.994 "driver_specific": {} 00:13:33.994 } 00:13:33.994 ] 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:33.994 
10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.994 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.252 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.252 "name": "Existed_Raid", 00:13:34.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.252 "strip_size_kb": 0, 00:13:34.252 "state": "configuring", 00:13:34.252 "raid_level": "raid1", 00:13:34.252 "superblock": false, 00:13:34.252 "num_base_bdevs": 3, 00:13:34.252 "num_base_bdevs_discovered": 2, 00:13:34.252 "num_base_bdevs_operational": 3, 00:13:34.252 "base_bdevs_list": [ 00:13:34.252 { 00:13:34.252 "name": "BaseBdev1", 00:13:34.252 "uuid": "c89be0b6-0b57-45ba-aba9-20afc7195a35", 00:13:34.252 "is_configured": true, 00:13:34.252 "data_offset": 0, 00:13:34.252 "data_size": 65536 00:13:34.252 }, 00:13:34.252 { 00:13:34.252 "name": "BaseBdev2", 00:13:34.252 "uuid": "835580cc-0327-4112-9ccd-9164430bfd16", 00:13:34.252 "is_configured": true, 00:13:34.252 "data_offset": 0, 00:13:34.252 "data_size": 65536 00:13:34.252 }, 00:13:34.252 { 00:13:34.252 "name": "BaseBdev3", 00:13:34.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.252 "is_configured": false, 00:13:34.252 "data_offset": 0, 00:13:34.252 "data_size": 0 00:13:34.252 } 00:13:34.252 ] 00:13:34.252 }' 00:13:34.252 10:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.252 10:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.819 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:34.819 [2024-07-15 10:20:59.575366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.819 [2024-07-15 10:20:59.575395] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf66700 00:13:34.819 [2024-07-15 10:20:59.575400] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 65536, blocklen 512 00:13:34.819 [2024-07-15 10:20:59.575523] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf663d0 00:13:34.819 [2024-07-15 10:20:59.575603] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf66700 00:13:34.819 [2024-07-15 10:20:59.575609] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf66700 00:13:34.819 [2024-07-15 10:20:59.575718] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.819 BaseBdev3 00:13:34.819 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:34.819 10:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:34.819 10:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:34.819 10:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:34.819 10:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:34.819 10:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:34.819 10:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:35.078 10:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:35.337 [ 00:13:35.337 { 00:13:35.337 "name": "BaseBdev3", 00:13:35.337 "aliases": [ 00:13:35.337 "0fbb3a75-a470-48ca-8320-7cc23ce77587" 00:13:35.337 ], 00:13:35.337 "product_name": "Malloc disk", 00:13:35.337 "block_size": 512, 00:13:35.337 "num_blocks": 65536, 00:13:35.337 "uuid": "0fbb3a75-a470-48ca-8320-7cc23ce77587", 00:13:35.337 "assigned_rate_limits": { 00:13:35.337 "rw_ios_per_sec": 0, 00:13:35.337 "rw_mbytes_per_sec": 0, 00:13:35.337 "r_mbytes_per_sec": 0, 00:13:35.337 "w_mbytes_per_sec": 0 00:13:35.337 }, 00:13:35.337 "claimed": true, 00:13:35.337 "claim_type": "exclusive_write", 00:13:35.337 "zoned": false, 00:13:35.337 "supported_io_types": { 00:13:35.337 "read": true, 00:13:35.337 "write": true, 00:13:35.337 "unmap": true, 00:13:35.337 "flush": true, 00:13:35.337 "reset": true, 00:13:35.337 "nvme_admin": false, 00:13:35.337 "nvme_io": false, 00:13:35.337 "nvme_io_md": false, 00:13:35.337 "write_zeroes": true, 00:13:35.337 "zcopy": true, 00:13:35.337 "get_zone_info": false, 00:13:35.337 "zone_management": false, 00:13:35.337 "zone_append": false, 00:13:35.337 "compare": false, 00:13:35.337 "compare_and_write": false, 00:13:35.337 "abort": true, 00:13:35.337 "seek_hole": false, 00:13:35.337 "seek_data": false, 00:13:35.337 "copy": true, 00:13:35.337 "nvme_iov_md": false 00:13:35.337 }, 00:13:35.337 "memory_domains": [ 00:13:35.337 { 00:13:35.337 "dma_device_id": "system", 00:13:35.337 "dma_device_type": 1 00:13:35.337 }, 00:13:35.337 { 00:13:35.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.337 "dma_device_type": 2 00:13:35.337 } 00:13:35.337 ], 00:13:35.337 "driver_specific": {} 00:13:35.337 } 00:13:35.337 ] 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:35.337 10:20:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.337 10:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.337 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.337 "name": "Existed_Raid", 00:13:35.337 "uuid": "7b4c31f9-476c-4d94-8280-6442969e0cc4", 00:13:35.337 "strip_size_kb": 0, 00:13:35.337 "state": "online", 00:13:35.337 "raid_level": "raid1", 00:13:35.337 "superblock": false, 00:13:35.337 "num_base_bdevs": 3, 00:13:35.337 "num_base_bdevs_discovered": 3, 00:13:35.337 "num_base_bdevs_operational": 3, 00:13:35.337 "base_bdevs_list": [ 00:13:35.337 { 00:13:35.337 "name": "BaseBdev1", 00:13:35.337 "uuid": "c89be0b6-0b57-45ba-aba9-20afc7195a35", 00:13:35.337 "is_configured": true, 00:13:35.337 "data_offset": 0, 00:13:35.337 "data_size": 65536 00:13:35.337 }, 00:13:35.337 { 00:13:35.337 "name": "BaseBdev2", 00:13:35.337 "uuid": "835580cc-0327-4112-9ccd-9164430bfd16", 00:13:35.337 "is_configured": true, 00:13:35.337 "data_offset": 0, 00:13:35.337 "data_size": 65536 00:13:35.337 }, 00:13:35.337 { 00:13:35.337 "name": "BaseBdev3", 00:13:35.337 "uuid": "0fbb3a75-a470-48ca-8320-7cc23ce77587", 00:13:35.337 "is_configured": true, 00:13:35.337 "data_offset": 0, 00:13:35.337 "data_size": 65536 00:13:35.337 } 00:13:35.337 ] 00:13:35.337 }' 00:13:35.337 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.337 10:21:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.906 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:35.906 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:35.906 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:35.906 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:35.906 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:13:35.906 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:35.906 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:35.906 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:36.166 [2024-07-15 10:21:00.734566] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.166 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:36.166 "name": "Existed_Raid", 00:13:36.166 "aliases": [ 00:13:36.166 "7b4c31f9-476c-4d94-8280-6442969e0cc4" 00:13:36.166 ], 00:13:36.166 "product_name": "Raid Volume", 00:13:36.166 "block_size": 512, 00:13:36.166 "num_blocks": 65536, 00:13:36.166 "uuid": "7b4c31f9-476c-4d94-8280-6442969e0cc4", 00:13:36.166 "assigned_rate_limits": { 00:13:36.166 "rw_ios_per_sec": 0, 00:13:36.166 "rw_mbytes_per_sec": 0, 00:13:36.166 "r_mbytes_per_sec": 0, 00:13:36.166 "w_mbytes_per_sec": 0 00:13:36.166 }, 00:13:36.166 "claimed": false, 00:13:36.166 "zoned": false, 00:13:36.166 "supported_io_types": { 00:13:36.166 "read": true, 00:13:36.166 "write": true, 00:13:36.166 "unmap": false, 00:13:36.166 "flush": false, 00:13:36.166 "reset": true, 00:13:36.166 "nvme_admin": false, 00:13:36.166 "nvme_io": false, 00:13:36.166 "nvme_io_md": false, 00:13:36.166 "write_zeroes": true, 00:13:36.166 "zcopy": false, 00:13:36.166 "get_zone_info": false, 00:13:36.166 "zone_management": false, 00:13:36.166 "zone_append": false, 00:13:36.166 "compare": false, 00:13:36.166 "compare_and_write": false, 00:13:36.166 "abort": false, 00:13:36.166 "seek_hole": false, 00:13:36.166 "seek_data": false, 00:13:36.166 "copy": false, 00:13:36.166 "nvme_iov_md": false 00:13:36.166 }, 00:13:36.166 "memory_domains": [ 00:13:36.166 { 00:13:36.166 "dma_device_id": "system", 00:13:36.166 "dma_device_type": 1 00:13:36.166 }, 00:13:36.166 { 00:13:36.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.166 "dma_device_type": 2 00:13:36.166 }, 00:13:36.166 { 00:13:36.166 "dma_device_id": "system", 00:13:36.166 "dma_device_type": 1 00:13:36.166 }, 00:13:36.166 { 00:13:36.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.166 "dma_device_type": 2 00:13:36.166 }, 00:13:36.166 { 00:13:36.166 "dma_device_id": "system", 00:13:36.166 "dma_device_type": 1 00:13:36.166 }, 00:13:36.166 { 00:13:36.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.166 "dma_device_type": 2 00:13:36.166 } 00:13:36.166 ], 00:13:36.166 "driver_specific": { 00:13:36.166 "raid": { 00:13:36.166 "uuid": "7b4c31f9-476c-4d94-8280-6442969e0cc4", 00:13:36.166 "strip_size_kb": 0, 00:13:36.166 "state": "online", 00:13:36.166 "raid_level": "raid1", 00:13:36.166 "superblock": false, 00:13:36.166 "num_base_bdevs": 3, 00:13:36.166 "num_base_bdevs_discovered": 3, 00:13:36.166 "num_base_bdevs_operational": 3, 00:13:36.166 "base_bdevs_list": [ 00:13:36.166 { 00:13:36.166 "name": "BaseBdev1", 00:13:36.166 "uuid": "c89be0b6-0b57-45ba-aba9-20afc7195a35", 00:13:36.166 "is_configured": true, 00:13:36.166 "data_offset": 0, 00:13:36.166 "data_size": 65536 00:13:36.166 }, 00:13:36.166 { 00:13:36.166 "name": "BaseBdev2", 00:13:36.166 "uuid": "835580cc-0327-4112-9ccd-9164430bfd16", 00:13:36.166 "is_configured": true, 00:13:36.166 "data_offset": 0, 00:13:36.166 "data_size": 65536 00:13:36.166 }, 00:13:36.166 { 00:13:36.166 "name": "BaseBdev3", 00:13:36.166 "uuid": 
"0fbb3a75-a470-48ca-8320-7cc23ce77587", 00:13:36.166 "is_configured": true, 00:13:36.166 "data_offset": 0, 00:13:36.166 "data_size": 65536 00:13:36.166 } 00:13:36.166 ] 00:13:36.166 } 00:13:36.166 } 00:13:36.166 }' 00:13:36.166 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.166 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:36.166 BaseBdev2 00:13:36.166 BaseBdev3' 00:13:36.166 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.166 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:36.166 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.426 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.426 "name": "BaseBdev1", 00:13:36.426 "aliases": [ 00:13:36.426 "c89be0b6-0b57-45ba-aba9-20afc7195a35" 00:13:36.426 ], 00:13:36.426 "product_name": "Malloc disk", 00:13:36.426 "block_size": 512, 00:13:36.426 "num_blocks": 65536, 00:13:36.426 "uuid": "c89be0b6-0b57-45ba-aba9-20afc7195a35", 00:13:36.426 "assigned_rate_limits": { 00:13:36.426 "rw_ios_per_sec": 0, 00:13:36.426 "rw_mbytes_per_sec": 0, 00:13:36.426 "r_mbytes_per_sec": 0, 00:13:36.426 "w_mbytes_per_sec": 0 00:13:36.426 }, 00:13:36.426 "claimed": true, 00:13:36.426 "claim_type": "exclusive_write", 00:13:36.426 "zoned": false, 00:13:36.426 "supported_io_types": { 00:13:36.426 "read": true, 00:13:36.426 "write": true, 00:13:36.426 "unmap": true, 00:13:36.426 "flush": true, 00:13:36.426 "reset": true, 00:13:36.426 "nvme_admin": false, 00:13:36.426 "nvme_io": false, 00:13:36.426 "nvme_io_md": false, 00:13:36.426 "write_zeroes": true, 00:13:36.426 "zcopy": true, 00:13:36.426 "get_zone_info": false, 00:13:36.426 "zone_management": false, 00:13:36.426 "zone_append": false, 00:13:36.426 "compare": false, 00:13:36.426 "compare_and_write": false, 00:13:36.426 "abort": true, 00:13:36.426 "seek_hole": false, 00:13:36.426 "seek_data": false, 00:13:36.426 "copy": true, 00:13:36.426 "nvme_iov_md": false 00:13:36.426 }, 00:13:36.426 "memory_domains": [ 00:13:36.426 { 00:13:36.426 "dma_device_id": "system", 00:13:36.426 "dma_device_type": 1 00:13:36.426 }, 00:13:36.426 { 00:13:36.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.426 "dma_device_type": 2 00:13:36.426 } 00:13:36.426 ], 00:13:36.426 "driver_specific": {} 00:13:36.426 }' 00:13:36.426 10:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.426 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.426 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:36.426 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.426 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.426 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:36.426 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.426 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.426 10:21:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:36.426 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.685 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:36.685 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:36.685 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:36.685 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:36.685 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:36.685 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:36.685 "name": "BaseBdev2", 00:13:36.685 "aliases": [ 00:13:36.685 "835580cc-0327-4112-9ccd-9164430bfd16" 00:13:36.685 ], 00:13:36.686 "product_name": "Malloc disk", 00:13:36.686 "block_size": 512, 00:13:36.686 "num_blocks": 65536, 00:13:36.686 "uuid": "835580cc-0327-4112-9ccd-9164430bfd16", 00:13:36.686 "assigned_rate_limits": { 00:13:36.686 "rw_ios_per_sec": 0, 00:13:36.686 "rw_mbytes_per_sec": 0, 00:13:36.686 "r_mbytes_per_sec": 0, 00:13:36.686 "w_mbytes_per_sec": 0 00:13:36.686 }, 00:13:36.686 "claimed": true, 00:13:36.686 "claim_type": "exclusive_write", 00:13:36.686 "zoned": false, 00:13:36.686 "supported_io_types": { 00:13:36.686 "read": true, 00:13:36.686 "write": true, 00:13:36.686 "unmap": true, 00:13:36.686 "flush": true, 00:13:36.686 "reset": true, 00:13:36.686 "nvme_admin": false, 00:13:36.686 "nvme_io": false, 00:13:36.686 "nvme_io_md": false, 00:13:36.686 "write_zeroes": true, 00:13:36.686 "zcopy": true, 00:13:36.686 "get_zone_info": false, 00:13:36.686 "zone_management": false, 00:13:36.686 "zone_append": false, 00:13:36.686 "compare": false, 00:13:36.686 "compare_and_write": false, 00:13:36.686 "abort": true, 00:13:36.686 "seek_hole": false, 00:13:36.686 "seek_data": false, 00:13:36.686 "copy": true, 00:13:36.686 "nvme_iov_md": false 00:13:36.686 }, 00:13:36.686 "memory_domains": [ 00:13:36.686 { 00:13:36.686 "dma_device_id": "system", 00:13:36.686 "dma_device_type": 1 00:13:36.686 }, 00:13:36.686 { 00:13:36.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.686 "dma_device_type": 2 00:13:36.686 } 00:13:36.686 ], 00:13:36.686 "driver_specific": {} 00:13:36.686 }' 00:13:36.686 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:13:36.944 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.203 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.203 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:37.203 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:37.203 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:37.203 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:37.203 "name": "BaseBdev3", 00:13:37.203 "aliases": [ 00:13:37.203 "0fbb3a75-a470-48ca-8320-7cc23ce77587" 00:13:37.203 ], 00:13:37.203 "product_name": "Malloc disk", 00:13:37.203 "block_size": 512, 00:13:37.203 "num_blocks": 65536, 00:13:37.203 "uuid": "0fbb3a75-a470-48ca-8320-7cc23ce77587", 00:13:37.203 "assigned_rate_limits": { 00:13:37.203 "rw_ios_per_sec": 0, 00:13:37.203 "rw_mbytes_per_sec": 0, 00:13:37.203 "r_mbytes_per_sec": 0, 00:13:37.203 "w_mbytes_per_sec": 0 00:13:37.203 }, 00:13:37.203 "claimed": true, 00:13:37.203 "claim_type": "exclusive_write", 00:13:37.203 "zoned": false, 00:13:37.203 "supported_io_types": { 00:13:37.203 "read": true, 00:13:37.203 "write": true, 00:13:37.203 "unmap": true, 00:13:37.203 "flush": true, 00:13:37.203 "reset": true, 00:13:37.203 "nvme_admin": false, 00:13:37.203 "nvme_io": false, 00:13:37.203 "nvme_io_md": false, 00:13:37.203 "write_zeroes": true, 00:13:37.203 "zcopy": true, 00:13:37.203 "get_zone_info": false, 00:13:37.203 "zone_management": false, 00:13:37.203 "zone_append": false, 00:13:37.203 "compare": false, 00:13:37.203 "compare_and_write": false, 00:13:37.203 "abort": true, 00:13:37.203 "seek_hole": false, 00:13:37.203 "seek_data": false, 00:13:37.203 "copy": true, 00:13:37.203 "nvme_iov_md": false 00:13:37.203 }, 00:13:37.203 "memory_domains": [ 00:13:37.203 { 00:13:37.203 "dma_device_id": "system", 00:13:37.203 "dma_device_type": 1 00:13:37.203 }, 00:13:37.203 { 00:13:37.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.203 "dma_device_type": 2 00:13:37.203 } 00:13:37.203 ], 00:13:37.203 "driver_specific": {} 00:13:37.203 }' 00:13:37.203 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.203 10:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.491 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:37.750 [2024-07-15 10:21:02.402706] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.750 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.010 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:38.010 "name": "Existed_Raid", 00:13:38.010 "uuid": "7b4c31f9-476c-4d94-8280-6442969e0cc4", 00:13:38.010 "strip_size_kb": 0, 00:13:38.010 "state": "online", 00:13:38.010 "raid_level": "raid1", 00:13:38.010 "superblock": false, 00:13:38.010 "num_base_bdevs": 3, 00:13:38.010 "num_base_bdevs_discovered": 2, 00:13:38.010 "num_base_bdevs_operational": 2, 00:13:38.010 "base_bdevs_list": [ 00:13:38.010 { 00:13:38.010 "name": null, 00:13:38.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.010 "is_configured": false, 00:13:38.010 "data_offset": 0, 00:13:38.010 "data_size": 65536 00:13:38.010 }, 00:13:38.010 { 00:13:38.010 "name": "BaseBdev2", 00:13:38.010 "uuid": "835580cc-0327-4112-9ccd-9164430bfd16", 00:13:38.010 "is_configured": true, 00:13:38.010 "data_offset": 0, 00:13:38.010 "data_size": 65536 00:13:38.010 }, 00:13:38.010 { 00:13:38.010 "name": "BaseBdev3", 00:13:38.010 "uuid": "0fbb3a75-a470-48ca-8320-7cc23ce77587", 00:13:38.010 "is_configured": true, 00:13:38.010 "data_offset": 0, 00:13:38.010 "data_size": 65536 00:13:38.010 } 00:13:38.010 ] 00:13:38.010 }' 
00:13:38.010 10:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.010 10:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.269 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:38.269 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:38.529 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.529 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:38.529 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:38.529 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:38.529 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:38.804 [2024-07-15 10:21:03.390022] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.804 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:38.804 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:38.804 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.804 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:38.804 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:38.804 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:38.804 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:39.072 [2024-07-15 10:21:03.744602] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:39.072 [2024-07-15 10:21:03.744657] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.072 [2024-07-15 10:21:03.754520] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.072 [2024-07-15 10:21:03.754560] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.072 [2024-07-15 10:21:03.754568] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf66700 name Existed_Raid, state offline 00:13:39.072 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:39.072 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:39.072 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.072 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:39.332 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:39.332 10:21:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:39.332 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:39.332 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:39.332 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:39.332 10:21:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:39.332 BaseBdev2 00:13:39.332 10:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:39.332 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:39.332 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:39.332 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:39.332 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:39.332 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:39.332 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:39.590 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:39.850 [ 00:13:39.850 { 00:13:39.850 "name": "BaseBdev2", 00:13:39.850 "aliases": [ 00:13:39.850 "6b1adac6-c6e2-467f-81a3-9de6baf44a97" 00:13:39.850 ], 00:13:39.850 "product_name": "Malloc disk", 00:13:39.850 "block_size": 512, 00:13:39.850 "num_blocks": 65536, 00:13:39.850 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:39.850 "assigned_rate_limits": { 00:13:39.850 "rw_ios_per_sec": 0, 00:13:39.850 "rw_mbytes_per_sec": 0, 00:13:39.850 "r_mbytes_per_sec": 0, 00:13:39.850 "w_mbytes_per_sec": 0 00:13:39.850 }, 00:13:39.850 "claimed": false, 00:13:39.850 "zoned": false, 00:13:39.850 "supported_io_types": { 00:13:39.850 "read": true, 00:13:39.850 "write": true, 00:13:39.850 "unmap": true, 00:13:39.850 "flush": true, 00:13:39.850 "reset": true, 00:13:39.850 "nvme_admin": false, 00:13:39.850 "nvme_io": false, 00:13:39.850 "nvme_io_md": false, 00:13:39.850 "write_zeroes": true, 00:13:39.850 "zcopy": true, 00:13:39.850 "get_zone_info": false, 00:13:39.850 "zone_management": false, 00:13:39.850 "zone_append": false, 00:13:39.850 "compare": false, 00:13:39.850 "compare_and_write": false, 00:13:39.850 "abort": true, 00:13:39.850 "seek_hole": false, 00:13:39.850 "seek_data": false, 00:13:39.850 "copy": true, 00:13:39.850 "nvme_iov_md": false 00:13:39.850 }, 00:13:39.850 "memory_domains": [ 00:13:39.850 { 00:13:39.850 "dma_device_id": "system", 00:13:39.850 "dma_device_type": 1 00:13:39.850 }, 00:13:39.850 { 00:13:39.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.850 "dma_device_type": 2 00:13:39.850 } 00:13:39.850 ], 00:13:39.850 "driver_specific": {} 00:13:39.850 } 00:13:39.850 ] 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 
00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:39.850 BaseBdev3 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:39.850 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:40.110 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:40.370 [ 00:13:40.370 { 00:13:40.370 "name": "BaseBdev3", 00:13:40.370 "aliases": [ 00:13:40.370 "997d11c4-dab1-488a-bba8-3fd3af9e6c5e" 00:13:40.370 ], 00:13:40.370 "product_name": "Malloc disk", 00:13:40.370 "block_size": 512, 00:13:40.370 "num_blocks": 65536, 00:13:40.370 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:40.370 "assigned_rate_limits": { 00:13:40.370 "rw_ios_per_sec": 0, 00:13:40.370 "rw_mbytes_per_sec": 0, 00:13:40.370 "r_mbytes_per_sec": 0, 00:13:40.370 "w_mbytes_per_sec": 0 00:13:40.370 }, 00:13:40.370 "claimed": false, 00:13:40.370 "zoned": false, 00:13:40.370 "supported_io_types": { 00:13:40.370 "read": true, 00:13:40.370 "write": true, 00:13:40.370 "unmap": true, 00:13:40.370 "flush": true, 00:13:40.370 "reset": true, 00:13:40.370 "nvme_admin": false, 00:13:40.370 "nvme_io": false, 00:13:40.370 "nvme_io_md": false, 00:13:40.370 "write_zeroes": true, 00:13:40.370 "zcopy": true, 00:13:40.370 "get_zone_info": false, 00:13:40.370 "zone_management": false, 00:13:40.370 "zone_append": false, 00:13:40.370 "compare": false, 00:13:40.370 "compare_and_write": false, 00:13:40.370 "abort": true, 00:13:40.370 "seek_hole": false, 00:13:40.370 "seek_data": false, 00:13:40.370 "copy": true, 00:13:40.370 "nvme_iov_md": false 00:13:40.370 }, 00:13:40.370 "memory_domains": [ 00:13:40.370 { 00:13:40.370 "dma_device_id": "system", 00:13:40.370 "dma_device_type": 1 00:13:40.370 }, 00:13:40.370 { 00:13:40.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.370 "dma_device_type": 2 00:13:40.370 } 00:13:40.370 ], 00:13:40.370 "driver_specific": {} 00:13:40.370 } 00:13:40.370 ] 00:13:40.370 10:21:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:40.370 10:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:40.370 10:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:40.370 10:21:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:40.370 [2024-07-15 10:21:05.093470] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.370 [2024-07-15 10:21:05.093504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.370 [2024-07-15 10:21:05.093520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.370 [2024-07-15 10:21:05.094540] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.370 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.629 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:40.629 "name": "Existed_Raid", 00:13:40.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.629 "strip_size_kb": 0, 00:13:40.629 "state": "configuring", 00:13:40.629 "raid_level": "raid1", 00:13:40.629 "superblock": false, 00:13:40.629 "num_base_bdevs": 3, 00:13:40.629 "num_base_bdevs_discovered": 2, 00:13:40.629 "num_base_bdevs_operational": 3, 00:13:40.629 "base_bdevs_list": [ 00:13:40.629 { 00:13:40.629 "name": "BaseBdev1", 00:13:40.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.629 "is_configured": false, 00:13:40.629 "data_offset": 0, 00:13:40.629 "data_size": 0 00:13:40.629 }, 00:13:40.629 { 00:13:40.629 "name": "BaseBdev2", 00:13:40.629 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:40.629 "is_configured": true, 00:13:40.629 "data_offset": 0, 00:13:40.629 "data_size": 65536 00:13:40.629 }, 00:13:40.629 { 00:13:40.629 "name": "BaseBdev3", 00:13:40.629 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:40.629 "is_configured": true, 00:13:40.629 "data_offset": 0, 00:13:40.629 "data_size": 65536 00:13:40.629 } 00:13:40.629 ] 00:13:40.629 }' 00:13:40.629 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:40.629 10:21:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:41.198 [2024-07-15 10:21:05.891546] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.198 10:21:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.457 10:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.457 "name": "Existed_Raid", 00:13:41.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.457 "strip_size_kb": 0, 00:13:41.457 "state": "configuring", 00:13:41.457 "raid_level": "raid1", 00:13:41.457 "superblock": false, 00:13:41.457 "num_base_bdevs": 3, 00:13:41.457 "num_base_bdevs_discovered": 1, 00:13:41.457 "num_base_bdevs_operational": 3, 00:13:41.457 "base_bdevs_list": [ 00:13:41.457 { 00:13:41.457 "name": "BaseBdev1", 00:13:41.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.457 "is_configured": false, 00:13:41.457 "data_offset": 0, 00:13:41.457 "data_size": 0 00:13:41.457 }, 00:13:41.457 { 00:13:41.457 "name": null, 00:13:41.457 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:41.457 "is_configured": false, 00:13:41.457 "data_offset": 0, 00:13:41.457 "data_size": 65536 00:13:41.457 }, 00:13:41.457 { 00:13:41.457 "name": "BaseBdev3", 00:13:41.457 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:41.457 "is_configured": true, 00:13:41.457 "data_offset": 0, 00:13:41.457 "data_size": 65536 00:13:41.457 } 00:13:41.457 ] 00:13:41.457 }' 00:13:41.457 10:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.457 10:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.025 10:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.025 10:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:42.025 10:21:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:42.025 10:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:42.285 [2024-07-15 10:21:06.888947] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.285 BaseBdev1 00:13:42.285 10:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:42.285 10:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:42.285 10:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:42.285 10:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:42.285 10:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:42.285 10:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:42.285 10:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:42.545 [ 00:13:42.545 { 00:13:42.545 "name": "BaseBdev1", 00:13:42.545 "aliases": [ 00:13:42.545 "a268fb79-6a06-4574-a077-4bf9e7a0ee01" 00:13:42.545 ], 00:13:42.545 "product_name": "Malloc disk", 00:13:42.545 "block_size": 512, 00:13:42.545 "num_blocks": 65536, 00:13:42.545 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:42.545 "assigned_rate_limits": { 00:13:42.545 "rw_ios_per_sec": 0, 00:13:42.545 "rw_mbytes_per_sec": 0, 00:13:42.545 "r_mbytes_per_sec": 0, 00:13:42.545 "w_mbytes_per_sec": 0 00:13:42.545 }, 00:13:42.545 "claimed": true, 00:13:42.545 "claim_type": "exclusive_write", 00:13:42.545 "zoned": false, 00:13:42.545 "supported_io_types": { 00:13:42.545 "read": true, 00:13:42.545 "write": true, 00:13:42.545 "unmap": true, 00:13:42.545 "flush": true, 00:13:42.545 "reset": true, 00:13:42.545 "nvme_admin": false, 00:13:42.545 "nvme_io": false, 00:13:42.545 "nvme_io_md": false, 00:13:42.545 "write_zeroes": true, 00:13:42.545 "zcopy": true, 00:13:42.545 "get_zone_info": false, 00:13:42.545 "zone_management": false, 00:13:42.545 "zone_append": false, 00:13:42.545 "compare": false, 00:13:42.545 "compare_and_write": false, 00:13:42.545 "abort": true, 00:13:42.545 "seek_hole": false, 00:13:42.545 "seek_data": false, 00:13:42.545 "copy": true, 00:13:42.545 "nvme_iov_md": false 00:13:42.545 }, 00:13:42.545 "memory_domains": [ 00:13:42.545 { 00:13:42.545 "dma_device_id": "system", 00:13:42.545 "dma_device_type": 1 00:13:42.545 }, 00:13:42.545 { 00:13:42.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.545 "dma_device_type": 2 00:13:42.545 } 00:13:42.545 ], 00:13:42.545 "driver_specific": {} 00:13:42.545 } 00:13:42.545 ] 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
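Every verify_raid_bdev_state call in this trace boils down to the same query: list all raid bdevs with bdev_raid_get_bdevs all, select the entry named Existed_Raid with jq, and compare fields such as .state and .num_base_bdevs_discovered against the expected values. A condensed sketch of that check (field names are taken from the JSON dumps above; the real helper also validates raid_level, strip_size_kb and the per-slot entries):

# assert that Existed_Raid is still assembling with the expected number of discovered members
rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state' <<< "$info") == configuring ]] || exit 1
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 2 ]] || exit 1   # BaseBdev1 and BaseBdev3 are the claimed members at this point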
00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.545 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.804 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.804 "name": "Existed_Raid", 00:13:42.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.804 "strip_size_kb": 0, 00:13:42.804 "state": "configuring", 00:13:42.804 "raid_level": "raid1", 00:13:42.804 "superblock": false, 00:13:42.804 "num_base_bdevs": 3, 00:13:42.804 "num_base_bdevs_discovered": 2, 00:13:42.804 "num_base_bdevs_operational": 3, 00:13:42.804 "base_bdevs_list": [ 00:13:42.804 { 00:13:42.804 "name": "BaseBdev1", 00:13:42.804 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:42.804 "is_configured": true, 00:13:42.804 "data_offset": 0, 00:13:42.804 "data_size": 65536 00:13:42.804 }, 00:13:42.804 { 00:13:42.804 "name": null, 00:13:42.804 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:42.804 "is_configured": false, 00:13:42.804 "data_offset": 0, 00:13:42.804 "data_size": 65536 00:13:42.804 }, 00:13:42.804 { 00:13:42.804 "name": "BaseBdev3", 00:13:42.804 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:42.804 "is_configured": true, 00:13:42.804 "data_offset": 0, 00:13:42.804 "data_size": 65536 00:13:42.804 } 00:13:42.804 ] 00:13:42.804 }' 00:13:42.804 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.804 10:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.372 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:43.372 10:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.372 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:43.372 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:43.631 [2024-07-15 10:21:08.260491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:43.631 
10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.631 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.890 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.890 "name": "Existed_Raid", 00:13:43.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.890 "strip_size_kb": 0, 00:13:43.890 "state": "configuring", 00:13:43.890 "raid_level": "raid1", 00:13:43.890 "superblock": false, 00:13:43.890 "num_base_bdevs": 3, 00:13:43.890 "num_base_bdevs_discovered": 1, 00:13:43.890 "num_base_bdevs_operational": 3, 00:13:43.890 "base_bdevs_list": [ 00:13:43.890 { 00:13:43.890 "name": "BaseBdev1", 00:13:43.890 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:43.890 "is_configured": true, 00:13:43.890 "data_offset": 0, 00:13:43.890 "data_size": 65536 00:13:43.890 }, 00:13:43.890 { 00:13:43.890 "name": null, 00:13:43.890 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:43.890 "is_configured": false, 00:13:43.890 "data_offset": 0, 00:13:43.890 "data_size": 65536 00:13:43.890 }, 00:13:43.890 { 00:13:43.890 "name": null, 00:13:43.890 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:43.890 "is_configured": false, 00:13:43.890 "data_offset": 0, 00:13:43.890 "data_size": 65536 00:13:43.890 } 00:13:43.890 ] 00:13:43.890 }' 00:13:43.890 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.890 10:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.149 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.149 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:44.407 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:44.407 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:44.666 [2024-07-15 10:21:09.251071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.666 10:21:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:44.666 "name": "Existed_Raid", 00:13:44.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.666 "strip_size_kb": 0, 00:13:44.666 "state": "configuring", 00:13:44.666 "raid_level": "raid1", 00:13:44.666 "superblock": false, 00:13:44.666 "num_base_bdevs": 3, 00:13:44.666 "num_base_bdevs_discovered": 2, 00:13:44.666 "num_base_bdevs_operational": 3, 00:13:44.666 "base_bdevs_list": [ 00:13:44.666 { 00:13:44.666 "name": "BaseBdev1", 00:13:44.666 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:44.666 "is_configured": true, 00:13:44.666 "data_offset": 0, 00:13:44.666 "data_size": 65536 00:13:44.666 }, 00:13:44.666 { 00:13:44.666 "name": null, 00:13:44.666 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:44.666 "is_configured": false, 00:13:44.666 "data_offset": 0, 00:13:44.666 "data_size": 65536 00:13:44.666 }, 00:13:44.666 { 00:13:44.666 "name": "BaseBdev3", 00:13:44.666 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:44.666 "is_configured": true, 00:13:44.666 "data_offset": 0, 00:13:44.666 "data_size": 65536 00:13:44.666 } 00:13:44.666 ] 00:13:44.666 }' 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:44.666 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.234 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.234 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:45.494 [2024-07-15 
10:21:10.237616] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.494 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.754 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.754 "name": "Existed_Raid", 00:13:45.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.754 "strip_size_kb": 0, 00:13:45.754 "state": "configuring", 00:13:45.754 "raid_level": "raid1", 00:13:45.754 "superblock": false, 00:13:45.754 "num_base_bdevs": 3, 00:13:45.754 "num_base_bdevs_discovered": 1, 00:13:45.754 "num_base_bdevs_operational": 3, 00:13:45.754 "base_bdevs_list": [ 00:13:45.754 { 00:13:45.754 "name": null, 00:13:45.754 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:45.754 "is_configured": false, 00:13:45.754 "data_offset": 0, 00:13:45.754 "data_size": 65536 00:13:45.754 }, 00:13:45.754 { 00:13:45.754 "name": null, 00:13:45.754 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:45.754 "is_configured": false, 00:13:45.754 "data_offset": 0, 00:13:45.754 "data_size": 65536 00:13:45.754 }, 00:13:45.754 { 00:13:45.754 "name": "BaseBdev3", 00:13:45.754 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:45.754 "is_configured": true, 00:13:45.754 "data_offset": 0, 00:13:45.754 "data_size": 65536 00:13:45.754 } 00:13:45.754 ] 00:13:45.754 }' 00:13:45.754 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.754 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.324 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.324 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.324 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:46.324 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:46.583 [2024-07-15 10:21:11.201890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.583 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.842 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:46.842 "name": "Existed_Raid", 00:13:46.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.842 "strip_size_kb": 0, 00:13:46.842 "state": "configuring", 00:13:46.842 "raid_level": "raid1", 00:13:46.842 "superblock": false, 00:13:46.842 "num_base_bdevs": 3, 00:13:46.842 "num_base_bdevs_discovered": 2, 00:13:46.842 "num_base_bdevs_operational": 3, 00:13:46.842 "base_bdevs_list": [ 00:13:46.842 { 00:13:46.842 "name": null, 00:13:46.842 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:46.842 "is_configured": false, 00:13:46.842 "data_offset": 0, 00:13:46.842 "data_size": 65536 00:13:46.842 }, 00:13:46.842 { 00:13:46.842 "name": "BaseBdev2", 00:13:46.842 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:46.842 "is_configured": true, 00:13:46.842 "data_offset": 0, 00:13:46.842 "data_size": 65536 00:13:46.842 }, 00:13:46.842 { 00:13:46.842 "name": "BaseBdev3", 00:13:46.842 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:46.842 "is_configured": true, 00:13:46.842 "data_offset": 0, 00:13:46.842 "data_size": 65536 00:13:46.842 } 00:13:46.842 ] 00:13:46.842 }' 00:13:46.842 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:46.842 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.102 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.102 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:47.361 10:21:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:47.361 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.361 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:47.620 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a268fb79-6a06-4574-a077-4bf9e7a0ee01 00:13:47.620 [2024-07-15 10:21:12.359698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:47.620 [2024-07-15 10:21:12.359725] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf67ea0 00:13:47.620 [2024-07-15 10:21:12.359731] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:47.620 [2024-07-15 10:21:12.359856] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x110ca10 00:13:47.620 [2024-07-15 10:21:12.359942] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf67ea0 00:13:47.620 [2024-07-15 10:21:12.359949] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf67ea0 00:13:47.620 [2024-07-15 10:21:12.360062] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.620 NewBaseBdev 00:13:47.620 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:47.620 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:47.620 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:47.620 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:47.620 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:47.620 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:47.620 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.879 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:48.139 [ 00:13:48.139 { 00:13:48.139 "name": "NewBaseBdev", 00:13:48.139 "aliases": [ 00:13:48.139 "a268fb79-6a06-4574-a077-4bf9e7a0ee01" 00:13:48.139 ], 00:13:48.139 "product_name": "Malloc disk", 00:13:48.139 "block_size": 512, 00:13:48.139 "num_blocks": 65536, 00:13:48.139 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:48.139 "assigned_rate_limits": { 00:13:48.139 "rw_ios_per_sec": 0, 00:13:48.139 "rw_mbytes_per_sec": 0, 00:13:48.139 "r_mbytes_per_sec": 0, 00:13:48.139 "w_mbytes_per_sec": 0 00:13:48.139 }, 00:13:48.139 "claimed": true, 00:13:48.139 "claim_type": "exclusive_write", 00:13:48.139 "zoned": false, 00:13:48.139 "supported_io_types": { 00:13:48.139 "read": true, 00:13:48.139 "write": true, 00:13:48.139 "unmap": true, 00:13:48.139 "flush": true, 00:13:48.139 "reset": true, 00:13:48.139 "nvme_admin": false, 00:13:48.139 "nvme_io": false, 00:13:48.139 "nvme_io_md": false, 
00:13:48.139 "write_zeroes": true, 00:13:48.139 "zcopy": true, 00:13:48.139 "get_zone_info": false, 00:13:48.139 "zone_management": false, 00:13:48.139 "zone_append": false, 00:13:48.139 "compare": false, 00:13:48.139 "compare_and_write": false, 00:13:48.139 "abort": true, 00:13:48.139 "seek_hole": false, 00:13:48.139 "seek_data": false, 00:13:48.139 "copy": true, 00:13:48.139 "nvme_iov_md": false 00:13:48.139 }, 00:13:48.139 "memory_domains": [ 00:13:48.139 { 00:13:48.139 "dma_device_id": "system", 00:13:48.139 "dma_device_type": 1 00:13:48.139 }, 00:13:48.139 { 00:13:48.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.139 "dma_device_type": 2 00:13:48.139 } 00:13:48.139 ], 00:13:48.139 "driver_specific": {} 00:13:48.139 } 00:13:48.139 ] 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.139 "name": "Existed_Raid", 00:13:48.139 "uuid": "270c82a5-98eb-42a5-b15e-cf0de75790a1", 00:13:48.139 "strip_size_kb": 0, 00:13:48.139 "state": "online", 00:13:48.139 "raid_level": "raid1", 00:13:48.139 "superblock": false, 00:13:48.139 "num_base_bdevs": 3, 00:13:48.139 "num_base_bdevs_discovered": 3, 00:13:48.139 "num_base_bdevs_operational": 3, 00:13:48.139 "base_bdevs_list": [ 00:13:48.139 { 00:13:48.139 "name": "NewBaseBdev", 00:13:48.139 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:48.139 "is_configured": true, 00:13:48.139 "data_offset": 0, 00:13:48.139 "data_size": 65536 00:13:48.139 }, 00:13:48.139 { 00:13:48.139 "name": "BaseBdev2", 00:13:48.139 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:48.139 "is_configured": true, 00:13:48.139 "data_offset": 0, 00:13:48.139 "data_size": 65536 00:13:48.139 }, 00:13:48.139 { 00:13:48.139 "name": "BaseBdev3", 00:13:48.139 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:48.139 "is_configured": true, 00:13:48.139 "data_offset": 0, 00:13:48.139 "data_size": 65536 00:13:48.139 } 00:13:48.139 ] 00:13:48.139 }' 
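The step that finally brings Existed_Raid online is the interesting one: BaseBdev1 was deleted earlier, but its UUID is still recorded in the raid's base_bdevs_list, so the test reads that UUID back and creates a fresh malloc bdev (NewBaseBdev) with -u set to it. The raid module recognizes the new bdev as the missing member, claims it, and the array moves from configuring to online, as the raid_bdev_info dump above confirms. A small sketch of that recovery pattern using the same RPCs as the trace:

# recreate the deleted member under a new name but with the UUID the raid still remembers for slot 0
rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
uuid=$(rpc bdev_raid_get_bdevs all | jq -r '.[0].base_bdevs_list[0].uuid')
rpc bdev_malloc_create 32 512 -b NewBaseBdev -u "$uuid"
rpc bdev_wait_for_examine
rpc bdev_raid_get_bdevs all | jq -r '.[0].state'   # expected to print "online" once all three slots are claimed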
00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.139 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.708 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:48.708 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:48.708 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:48.708 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:48.708 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:48.708 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:48.708 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:48.708 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:48.967 [2024-07-15 10:21:13.534925] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.967 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:48.967 "name": "Existed_Raid", 00:13:48.967 "aliases": [ 00:13:48.967 "270c82a5-98eb-42a5-b15e-cf0de75790a1" 00:13:48.967 ], 00:13:48.967 "product_name": "Raid Volume", 00:13:48.967 "block_size": 512, 00:13:48.967 "num_blocks": 65536, 00:13:48.967 "uuid": "270c82a5-98eb-42a5-b15e-cf0de75790a1", 00:13:48.967 "assigned_rate_limits": { 00:13:48.967 "rw_ios_per_sec": 0, 00:13:48.967 "rw_mbytes_per_sec": 0, 00:13:48.967 "r_mbytes_per_sec": 0, 00:13:48.967 "w_mbytes_per_sec": 0 00:13:48.967 }, 00:13:48.967 "claimed": false, 00:13:48.967 "zoned": false, 00:13:48.967 "supported_io_types": { 00:13:48.967 "read": true, 00:13:48.967 "write": true, 00:13:48.967 "unmap": false, 00:13:48.967 "flush": false, 00:13:48.967 "reset": true, 00:13:48.967 "nvme_admin": false, 00:13:48.967 "nvme_io": false, 00:13:48.967 "nvme_io_md": false, 00:13:48.967 "write_zeroes": true, 00:13:48.967 "zcopy": false, 00:13:48.967 "get_zone_info": false, 00:13:48.967 "zone_management": false, 00:13:48.967 "zone_append": false, 00:13:48.967 "compare": false, 00:13:48.967 "compare_and_write": false, 00:13:48.967 "abort": false, 00:13:48.967 "seek_hole": false, 00:13:48.967 "seek_data": false, 00:13:48.967 "copy": false, 00:13:48.967 "nvme_iov_md": false 00:13:48.967 }, 00:13:48.967 "memory_domains": [ 00:13:48.967 { 00:13:48.967 "dma_device_id": "system", 00:13:48.967 "dma_device_type": 1 00:13:48.968 }, 00:13:48.968 { 00:13:48.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.968 "dma_device_type": 2 00:13:48.968 }, 00:13:48.968 { 00:13:48.968 "dma_device_id": "system", 00:13:48.968 "dma_device_type": 1 00:13:48.968 }, 00:13:48.968 { 00:13:48.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.968 "dma_device_type": 2 00:13:48.968 }, 00:13:48.968 { 00:13:48.968 "dma_device_id": "system", 00:13:48.968 "dma_device_type": 1 00:13:48.968 }, 00:13:48.968 { 00:13:48.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.968 "dma_device_type": 2 00:13:48.968 } 00:13:48.968 ], 00:13:48.968 "driver_specific": { 00:13:48.968 "raid": { 00:13:48.968 "uuid": "270c82a5-98eb-42a5-b15e-cf0de75790a1", 00:13:48.968 "strip_size_kb": 0, 00:13:48.968 
"state": "online", 00:13:48.968 "raid_level": "raid1", 00:13:48.968 "superblock": false, 00:13:48.968 "num_base_bdevs": 3, 00:13:48.968 "num_base_bdevs_discovered": 3, 00:13:48.968 "num_base_bdevs_operational": 3, 00:13:48.968 "base_bdevs_list": [ 00:13:48.968 { 00:13:48.968 "name": "NewBaseBdev", 00:13:48.968 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:48.968 "is_configured": true, 00:13:48.968 "data_offset": 0, 00:13:48.968 "data_size": 65536 00:13:48.968 }, 00:13:48.968 { 00:13:48.968 "name": "BaseBdev2", 00:13:48.968 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:48.968 "is_configured": true, 00:13:48.968 "data_offset": 0, 00:13:48.968 "data_size": 65536 00:13:48.968 }, 00:13:48.968 { 00:13:48.968 "name": "BaseBdev3", 00:13:48.968 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:48.968 "is_configured": true, 00:13:48.968 "data_offset": 0, 00:13:48.968 "data_size": 65536 00:13:48.968 } 00:13:48.968 ] 00:13:48.968 } 00:13:48.968 } 00:13:48.968 }' 00:13:48.968 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.968 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:48.968 BaseBdev2 00:13:48.968 BaseBdev3' 00:13:48.968 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:48.968 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:48.968 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.227 "name": "NewBaseBdev", 00:13:49.227 "aliases": [ 00:13:49.227 "a268fb79-6a06-4574-a077-4bf9e7a0ee01" 00:13:49.227 ], 00:13:49.227 "product_name": "Malloc disk", 00:13:49.227 "block_size": 512, 00:13:49.227 "num_blocks": 65536, 00:13:49.227 "uuid": "a268fb79-6a06-4574-a077-4bf9e7a0ee01", 00:13:49.227 "assigned_rate_limits": { 00:13:49.227 "rw_ios_per_sec": 0, 00:13:49.227 "rw_mbytes_per_sec": 0, 00:13:49.227 "r_mbytes_per_sec": 0, 00:13:49.227 "w_mbytes_per_sec": 0 00:13:49.227 }, 00:13:49.227 "claimed": true, 00:13:49.227 "claim_type": "exclusive_write", 00:13:49.227 "zoned": false, 00:13:49.227 "supported_io_types": { 00:13:49.227 "read": true, 00:13:49.227 "write": true, 00:13:49.227 "unmap": true, 00:13:49.227 "flush": true, 00:13:49.227 "reset": true, 00:13:49.227 "nvme_admin": false, 00:13:49.227 "nvme_io": false, 00:13:49.227 "nvme_io_md": false, 00:13:49.227 "write_zeroes": true, 00:13:49.227 "zcopy": true, 00:13:49.227 "get_zone_info": false, 00:13:49.227 "zone_management": false, 00:13:49.227 "zone_append": false, 00:13:49.227 "compare": false, 00:13:49.227 "compare_and_write": false, 00:13:49.227 "abort": true, 00:13:49.227 "seek_hole": false, 00:13:49.227 "seek_data": false, 00:13:49.227 "copy": true, 00:13:49.227 "nvme_iov_md": false 00:13:49.227 }, 00:13:49.227 "memory_domains": [ 00:13:49.227 { 00:13:49.227 "dma_device_id": "system", 00:13:49.227 "dma_device_type": 1 00:13:49.227 }, 00:13:49.227 { 00:13:49.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.227 "dma_device_type": 2 00:13:49.227 } 00:13:49.227 ], 00:13:49.227 "driver_specific": {} 00:13:49.227 }' 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.227 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.227 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.227 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.486 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.486 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:49.486 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:49.486 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:49.486 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:49.745 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.745 "name": "BaseBdev2", 00:13:49.745 "aliases": [ 00:13:49.745 "6b1adac6-c6e2-467f-81a3-9de6baf44a97" 00:13:49.745 ], 00:13:49.745 "product_name": "Malloc disk", 00:13:49.745 "block_size": 512, 00:13:49.745 "num_blocks": 65536, 00:13:49.745 "uuid": "6b1adac6-c6e2-467f-81a3-9de6baf44a97", 00:13:49.745 "assigned_rate_limits": { 00:13:49.745 "rw_ios_per_sec": 0, 00:13:49.745 "rw_mbytes_per_sec": 0, 00:13:49.745 "r_mbytes_per_sec": 0, 00:13:49.745 "w_mbytes_per_sec": 0 00:13:49.745 }, 00:13:49.745 "claimed": true, 00:13:49.745 "claim_type": "exclusive_write", 00:13:49.745 "zoned": false, 00:13:49.745 "supported_io_types": { 00:13:49.746 "read": true, 00:13:49.746 "write": true, 00:13:49.746 "unmap": true, 00:13:49.746 "flush": true, 00:13:49.746 "reset": true, 00:13:49.746 "nvme_admin": false, 00:13:49.746 "nvme_io": false, 00:13:49.746 "nvme_io_md": false, 00:13:49.746 "write_zeroes": true, 00:13:49.746 "zcopy": true, 00:13:49.746 "get_zone_info": false, 00:13:49.746 "zone_management": false, 00:13:49.746 "zone_append": false, 00:13:49.746 "compare": false, 00:13:49.746 "compare_and_write": false, 00:13:49.746 "abort": true, 00:13:49.746 "seek_hole": false, 00:13:49.746 "seek_data": false, 00:13:49.746 "copy": true, 00:13:49.746 "nvme_iov_md": false 00:13:49.746 }, 00:13:49.746 "memory_domains": [ 00:13:49.746 { 00:13:49.746 "dma_device_id": "system", 00:13:49.746 "dma_device_type": 1 00:13:49.746 }, 00:13:49.746 { 00:13:49.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.746 "dma_device_type": 2 00:13:49.746 } 00:13:49.746 ], 00:13:49.746 "driver_specific": {} 00:13:49.746 }' 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.746 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.004 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.004 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.004 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:50.004 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:50.004 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:50.004 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:50.004 "name": "BaseBdev3", 00:13:50.004 "aliases": [ 00:13:50.004 "997d11c4-dab1-488a-bba8-3fd3af9e6c5e" 00:13:50.004 ], 00:13:50.004 "product_name": "Malloc disk", 00:13:50.004 "block_size": 512, 00:13:50.004 "num_blocks": 65536, 00:13:50.004 "uuid": "997d11c4-dab1-488a-bba8-3fd3af9e6c5e", 00:13:50.004 "assigned_rate_limits": { 00:13:50.004 "rw_ios_per_sec": 0, 00:13:50.004 "rw_mbytes_per_sec": 0, 00:13:50.004 "r_mbytes_per_sec": 0, 00:13:50.004 "w_mbytes_per_sec": 0 00:13:50.004 }, 00:13:50.004 "claimed": true, 00:13:50.004 "claim_type": "exclusive_write", 00:13:50.004 "zoned": false, 00:13:50.004 "supported_io_types": { 00:13:50.004 "read": true, 00:13:50.004 "write": true, 00:13:50.004 "unmap": true, 00:13:50.004 "flush": true, 00:13:50.004 "reset": true, 00:13:50.004 "nvme_admin": false, 00:13:50.004 "nvme_io": false, 00:13:50.004 "nvme_io_md": false, 00:13:50.004 "write_zeroes": true, 00:13:50.004 "zcopy": true, 00:13:50.005 "get_zone_info": false, 00:13:50.005 "zone_management": false, 00:13:50.005 "zone_append": false, 00:13:50.005 "compare": false, 00:13:50.005 "compare_and_write": false, 00:13:50.005 "abort": true, 00:13:50.005 "seek_hole": false, 00:13:50.005 "seek_data": false, 00:13:50.005 "copy": true, 00:13:50.005 "nvme_iov_md": false 00:13:50.005 }, 00:13:50.005 "memory_domains": [ 00:13:50.005 { 00:13:50.005 "dma_device_id": "system", 00:13:50.005 "dma_device_type": 1 00:13:50.005 }, 00:13:50.005 { 00:13:50.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.005 "dma_device_type": 2 00:13:50.005 } 00:13:50.005 ], 00:13:50.005 "driver_specific": {} 00:13:50.005 }' 00:13:50.005 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.263 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.263 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:50.263 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.263 10:21:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.263 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:50.263 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.263 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.263 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.263 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.263 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:50.522 [2024-07-15 10:21:15.239229] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.522 [2024-07-15 10:21:15.239248] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.522 [2024-07-15 10:21:15.239281] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.522 [2024-07-15 10:21:15.239453] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.522 [2024-07-15 10:21:15.239461] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf67ea0 name Existed_Raid, state offline 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1789416 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1789416 ']' 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1789416 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1789416 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1789416' 00:13:50.522 killing process with pid 1789416 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1789416 00:13:50.522 [2024-07-15 10:21:15.305453] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.522 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1789416 00:13:50.781 [2024-07-15 10:21:15.327819] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.781 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:50.781 00:13:50.781 real 0m21.455s 00:13:50.781 user 0m39.204s 00:13:50.781 sys 0m4.078s 00:13:50.781 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.781 10:21:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.781 ************************************ 00:13:50.781 END TEST raid_state_function_test 00:13:50.781 ************************************ 00:13:50.781 10:21:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:50.781 10:21:15 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:50.781 10:21:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:50.781 10:21:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.781 10:21:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.041 ************************************ 00:13:51.041 START TEST raid_state_function_test_sb 00:13:51.041 ************************************ 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:51.041 
10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1794267 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1794267' 00:13:51.041 Process raid pid: 1794267 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1794267 /var/tmp/spdk-raid.sock 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1794267 ']' 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.041 10:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.041 [2024-07-15 10:21:15.623772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
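For the superblock variant the script launches a dedicated bdev_svc application that serves the /var/tmp/spdk-raid.sock RPC socket with bdev_raid debug logging enabled, then waits in waitforlisten until the socket answers before issuing any raid RPCs. A rough equivalent of that startup handshake, polling with the generic rpc_get_methods call instead of the test's waitforlisten helper (the binary path is the one printed in the trace):

# start the standalone bdev service and block until its RPC socket is usable
SVC=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
$SVC -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &   # -i 0: shared-memory instance id, -L: enable bdev_raid debug logs
raid_pid=$!
until rpc rpc_get_methods >/dev/null 2>&1; do         # any successful RPC means the app is up and listening
    kill -0 "$raid_pid" 2>/dev/null || exit 1         # give up if the service died during startup
    sleep 0.5
done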
00:13:51.041 [2024-07-15 10:21:15.623816] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:51.041 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:51.041 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:51.041 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:51.041 [2024-07-15 10:21:15.715519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.041 [2024-07-15 10:21:15.789153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.300 [2024-07-15 10:21:15.850759] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.300 [2024-07-15 10:21:15.850783] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:51.869 [2024-07-15 10:21:16.581702] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.869 [2024-07-15 10:21:16.581736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.869 [2024-07-15 10:21:16.581746] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.869 [2024-07-15 10:21:16.581755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.869 [2024-07-15 10:21:16.581762] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:51.869 [2024-07-15 10:21:16.581772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 
-- # local strip_size=0 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.869 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.128 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:52.128 "name": "Existed_Raid", 00:13:52.128 "uuid": "f61fe48f-1474-4da9-bbe3-f0bdf1801df3", 00:13:52.128 "strip_size_kb": 0, 00:13:52.128 "state": "configuring", 00:13:52.128 "raid_level": "raid1", 00:13:52.128 "superblock": true, 00:13:52.128 "num_base_bdevs": 3, 00:13:52.128 "num_base_bdevs_discovered": 0, 00:13:52.128 "num_base_bdevs_operational": 3, 00:13:52.128 "base_bdevs_list": [ 00:13:52.128 { 00:13:52.128 "name": "BaseBdev1", 00:13:52.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.128 "is_configured": false, 00:13:52.128 "data_offset": 0, 00:13:52.128 "data_size": 0 00:13:52.128 }, 00:13:52.128 { 00:13:52.128 "name": "BaseBdev2", 00:13:52.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.128 "is_configured": false, 00:13:52.128 "data_offset": 0, 00:13:52.128 "data_size": 0 00:13:52.128 }, 00:13:52.128 { 00:13:52.128 "name": "BaseBdev3", 00:13:52.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.128 "is_configured": false, 00:13:52.128 "data_offset": 0, 00:13:52.128 "data_size": 0 00:13:52.128 } 00:13:52.128 ] 00:13:52.128 }' 00:13:52.128 10:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:52.128 10:21:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.701 10:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:52.701 [2024-07-15 10:21:17.395698] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.701 [2024-07-15 10:21:17.395722] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24fef40 name Existed_Raid, state configuring 00:13:52.701 10:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:52.996 [2024-07-15 10:21:17.560143] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.996 [2024-07-15 10:21:17.560162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.996 [2024-07-15 10:21:17.560171] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.996 [2024-07-15 10:21:17.560181] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.996 [2024-07-15 10:21:17.560188] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.996 [2024-07-15 10:21:17.560198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.996 10:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:52.996 [2024-07-15 10:21:17.732950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.996 BaseBdev1 00:13:52.996 10:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:52.996 10:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:52.996 10:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:52.996 10:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:52.996 10:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:52.996 10:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:52.996 10:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:53.254 10:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:53.514 [ 00:13:53.514 { 00:13:53.514 "name": "BaseBdev1", 00:13:53.514 "aliases": [ 00:13:53.514 "ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf" 00:13:53.514 ], 00:13:53.514 "product_name": "Malloc disk", 00:13:53.514 "block_size": 512, 00:13:53.514 "num_blocks": 65536, 00:13:53.514 "uuid": "ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf", 00:13:53.514 "assigned_rate_limits": { 00:13:53.514 "rw_ios_per_sec": 0, 00:13:53.514 "rw_mbytes_per_sec": 0, 00:13:53.514 "r_mbytes_per_sec": 0, 00:13:53.514 "w_mbytes_per_sec": 0 00:13:53.514 }, 00:13:53.514 "claimed": true, 00:13:53.514 "claim_type": "exclusive_write", 00:13:53.514 "zoned": false, 00:13:53.514 "supported_io_types": { 00:13:53.514 "read": true, 00:13:53.514 "write": true, 00:13:53.514 "unmap": true, 00:13:53.514 "flush": true, 00:13:53.514 "reset": true, 00:13:53.514 "nvme_admin": false, 00:13:53.514 "nvme_io": false, 00:13:53.514 "nvme_io_md": false, 00:13:53.514 "write_zeroes": true, 00:13:53.514 "zcopy": true, 00:13:53.514 "get_zone_info": false, 00:13:53.514 "zone_management": false, 00:13:53.514 "zone_append": false, 00:13:53.514 "compare": false, 00:13:53.514 "compare_and_write": false, 00:13:53.514 "abort": true, 00:13:53.514 "seek_hole": false, 00:13:53.514 "seek_data": false, 00:13:53.514 "copy": true, 00:13:53.514 "nvme_iov_md": false 00:13:53.514 }, 00:13:53.514 "memory_domains": [ 00:13:53.514 { 00:13:53.514 "dma_device_id": "system", 00:13:53.514 "dma_device_type": 1 00:13:53.514 }, 00:13:53.514 { 00:13:53.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.514 "dma_device_type": 2 00:13:53.514 } 00:13:53.514 ], 00:13:53.514 "driver_specific": {} 00:13:53.514 } 00:13:53.514 ] 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # return 0 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.514 "name": "Existed_Raid", 00:13:53.514 "uuid": "bc477c28-e14d-4c53-8c6e-8c1959695a6f", 00:13:53.514 "strip_size_kb": 0, 00:13:53.514 "state": "configuring", 00:13:53.514 "raid_level": "raid1", 00:13:53.514 "superblock": true, 00:13:53.514 "num_base_bdevs": 3, 00:13:53.514 "num_base_bdevs_discovered": 1, 00:13:53.514 "num_base_bdevs_operational": 3, 00:13:53.514 "base_bdevs_list": [ 00:13:53.514 { 00:13:53.514 "name": "BaseBdev1", 00:13:53.514 "uuid": "ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf", 00:13:53.514 "is_configured": true, 00:13:53.514 "data_offset": 2048, 00:13:53.514 "data_size": 63488 00:13:53.514 }, 00:13:53.514 { 00:13:53.514 "name": "BaseBdev2", 00:13:53.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.514 "is_configured": false, 00:13:53.514 "data_offset": 0, 00:13:53.514 "data_size": 0 00:13:53.514 }, 00:13:53.514 { 00:13:53.514 "name": "BaseBdev3", 00:13:53.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.514 "is_configured": false, 00:13:53.514 "data_offset": 0, 00:13:53.514 "data_size": 0 00:13:53.514 } 00:13:53.514 ] 00:13:53.514 }' 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.514 10:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.083 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:54.342 [2024-07-15 10:21:18.895947] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:54.342 [2024-07-15 10:21:18.895973] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24fe810 name Existed_Raid, state configuring 00:13:54.342 10:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:54.342 [2024-07-15 10:21:19.060378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.342 [2024-07-15 10:21:19.061373] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.342 [2024-07-15 10:21:19.061398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.342 [2024-07-15 10:21:19.061407] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:54.342 [2024-07-15 10:21:19.061417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.342 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.601 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.601 "name": "Existed_Raid", 00:13:54.601 "uuid": "f946b357-335f-4638-8fc5-03bc5d32a1b7", 00:13:54.601 "strip_size_kb": 0, 00:13:54.601 "state": "configuring", 00:13:54.601 "raid_level": "raid1", 00:13:54.601 "superblock": true, 00:13:54.601 "num_base_bdevs": 3, 00:13:54.601 "num_base_bdevs_discovered": 1, 00:13:54.601 "num_base_bdevs_operational": 3, 00:13:54.601 "base_bdevs_list": [ 00:13:54.601 { 00:13:54.601 "name": "BaseBdev1", 00:13:54.601 "uuid": "ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf", 00:13:54.601 "is_configured": true, 00:13:54.601 "data_offset": 2048, 00:13:54.601 "data_size": 63488 00:13:54.601 }, 00:13:54.601 { 00:13:54.601 "name": "BaseBdev2", 00:13:54.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.601 "is_configured": false, 00:13:54.601 "data_offset": 0, 00:13:54.601 "data_size": 0 00:13:54.601 }, 00:13:54.601 { 00:13:54.601 "name": 
"BaseBdev3", 00:13:54.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.601 "is_configured": false, 00:13:54.601 "data_offset": 0, 00:13:54.601 "data_size": 0 00:13:54.601 } 00:13:54.601 ] 00:13:54.601 }' 00:13:54.601 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.601 10:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.169 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.169 [2024-07-15 10:21:19.873283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.169 BaseBdev2 00:13:55.169 10:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:55.169 10:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:55.169 10:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:55.169 10:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:55.169 10:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:55.169 10:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:55.169 10:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:55.427 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.427 [ 00:13:55.427 { 00:13:55.427 "name": "BaseBdev2", 00:13:55.427 "aliases": [ 00:13:55.427 "cff634e6-dbf6-4b8c-9b18-3cd58482f87b" 00:13:55.427 ], 00:13:55.427 "product_name": "Malloc disk", 00:13:55.427 "block_size": 512, 00:13:55.427 "num_blocks": 65536, 00:13:55.427 "uuid": "cff634e6-dbf6-4b8c-9b18-3cd58482f87b", 00:13:55.427 "assigned_rate_limits": { 00:13:55.427 "rw_ios_per_sec": 0, 00:13:55.427 "rw_mbytes_per_sec": 0, 00:13:55.428 "r_mbytes_per_sec": 0, 00:13:55.428 "w_mbytes_per_sec": 0 00:13:55.428 }, 00:13:55.428 "claimed": true, 00:13:55.428 "claim_type": "exclusive_write", 00:13:55.428 "zoned": false, 00:13:55.428 "supported_io_types": { 00:13:55.428 "read": true, 00:13:55.428 "write": true, 00:13:55.428 "unmap": true, 00:13:55.428 "flush": true, 00:13:55.428 "reset": true, 00:13:55.428 "nvme_admin": false, 00:13:55.428 "nvme_io": false, 00:13:55.428 "nvme_io_md": false, 00:13:55.428 "write_zeroes": true, 00:13:55.428 "zcopy": true, 00:13:55.428 "get_zone_info": false, 00:13:55.428 "zone_management": false, 00:13:55.428 "zone_append": false, 00:13:55.428 "compare": false, 00:13:55.428 "compare_and_write": false, 00:13:55.428 "abort": true, 00:13:55.428 "seek_hole": false, 00:13:55.428 "seek_data": false, 00:13:55.428 "copy": true, 00:13:55.428 "nvme_iov_md": false 00:13:55.428 }, 00:13:55.428 "memory_domains": [ 00:13:55.428 { 00:13:55.428 "dma_device_id": "system", 00:13:55.428 "dma_device_type": 1 00:13:55.428 }, 00:13:55.428 { 00:13:55.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.428 "dma_device_type": 2 00:13:55.428 } 00:13:55.428 ], 00:13:55.428 "driver_specific": {} 
00:13:55.428 } 00:13:55.428 ] 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.428 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.687 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.687 "name": "Existed_Raid", 00:13:55.687 "uuid": "f946b357-335f-4638-8fc5-03bc5d32a1b7", 00:13:55.687 "strip_size_kb": 0, 00:13:55.687 "state": "configuring", 00:13:55.687 "raid_level": "raid1", 00:13:55.687 "superblock": true, 00:13:55.687 "num_base_bdevs": 3, 00:13:55.687 "num_base_bdevs_discovered": 2, 00:13:55.687 "num_base_bdevs_operational": 3, 00:13:55.687 "base_bdevs_list": [ 00:13:55.687 { 00:13:55.687 "name": "BaseBdev1", 00:13:55.687 "uuid": "ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf", 00:13:55.687 "is_configured": true, 00:13:55.687 "data_offset": 2048, 00:13:55.687 "data_size": 63488 00:13:55.687 }, 00:13:55.687 { 00:13:55.687 "name": "BaseBdev2", 00:13:55.687 "uuid": "cff634e6-dbf6-4b8c-9b18-3cd58482f87b", 00:13:55.687 "is_configured": true, 00:13:55.687 "data_offset": 2048, 00:13:55.687 "data_size": 63488 00:13:55.687 }, 00:13:55.687 { 00:13:55.687 "name": "BaseBdev3", 00:13:55.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.687 "is_configured": false, 00:13:55.687 "data_offset": 0, 00:13:55.687 "data_size": 0 00:13:55.687 } 00:13:55.687 ] 00:13:55.687 }' 00:13:55.687 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.687 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.254 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:56.254 [2024-07-15 
10:21:21.042995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.254 [2024-07-15 10:21:21.043109] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x24ff700 00:13:56.254 [2024-07-15 10:21:21.043119] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:56.254 [2024-07-15 10:21:21.043235] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24ff3d0 00:13:56.254 [2024-07-15 10:21:21.043328] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x24ff700 00:13:56.254 [2024-07-15 10:21:21.043335] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x24ff700 00:13:56.254 [2024-07-15 10:21:21.043402] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.513 BaseBdev3 00:13:56.513 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:56.513 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:56.513 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:56.513 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:56.513 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:56.513 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:56.513 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.513 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:56.772 [ 00:13:56.772 { 00:13:56.772 "name": "BaseBdev3", 00:13:56.772 "aliases": [ 00:13:56.772 "d134e23c-713b-4adf-b5b5-547f953c7ee4" 00:13:56.772 ], 00:13:56.772 "product_name": "Malloc disk", 00:13:56.772 "block_size": 512, 00:13:56.772 "num_blocks": 65536, 00:13:56.772 "uuid": "d134e23c-713b-4adf-b5b5-547f953c7ee4", 00:13:56.772 "assigned_rate_limits": { 00:13:56.772 "rw_ios_per_sec": 0, 00:13:56.772 "rw_mbytes_per_sec": 0, 00:13:56.772 "r_mbytes_per_sec": 0, 00:13:56.772 "w_mbytes_per_sec": 0 00:13:56.772 }, 00:13:56.772 "claimed": true, 00:13:56.772 "claim_type": "exclusive_write", 00:13:56.772 "zoned": false, 00:13:56.772 "supported_io_types": { 00:13:56.772 "read": true, 00:13:56.772 "write": true, 00:13:56.772 "unmap": true, 00:13:56.772 "flush": true, 00:13:56.772 "reset": true, 00:13:56.772 "nvme_admin": false, 00:13:56.772 "nvme_io": false, 00:13:56.772 "nvme_io_md": false, 00:13:56.772 "write_zeroes": true, 00:13:56.772 "zcopy": true, 00:13:56.772 "get_zone_info": false, 00:13:56.772 "zone_management": false, 00:13:56.772 "zone_append": false, 00:13:56.772 "compare": false, 00:13:56.772 "compare_and_write": false, 00:13:56.772 "abort": true, 00:13:56.772 "seek_hole": false, 00:13:56.772 "seek_data": false, 00:13:56.772 "copy": true, 00:13:56.772 "nvme_iov_md": false 00:13:56.772 }, 00:13:56.772 "memory_domains": [ 00:13:56.772 { 00:13:56.772 "dma_device_id": "system", 00:13:56.772 "dma_device_type": 1 00:13:56.772 }, 00:13:56.772 { 00:13:56.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.772 
"dma_device_type": 2 00:13:56.772 } 00:13:56.772 ], 00:13:56.772 "driver_specific": {} 00:13:56.772 } 00:13:56.772 ] 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.772 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.030 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.030 "name": "Existed_Raid", 00:13:57.030 "uuid": "f946b357-335f-4638-8fc5-03bc5d32a1b7", 00:13:57.030 "strip_size_kb": 0, 00:13:57.030 "state": "online", 00:13:57.030 "raid_level": "raid1", 00:13:57.030 "superblock": true, 00:13:57.030 "num_base_bdevs": 3, 00:13:57.030 "num_base_bdevs_discovered": 3, 00:13:57.030 "num_base_bdevs_operational": 3, 00:13:57.030 "base_bdevs_list": [ 00:13:57.030 { 00:13:57.030 "name": "BaseBdev1", 00:13:57.030 "uuid": "ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf", 00:13:57.030 "is_configured": true, 00:13:57.030 "data_offset": 2048, 00:13:57.030 "data_size": 63488 00:13:57.030 }, 00:13:57.030 { 00:13:57.030 "name": "BaseBdev2", 00:13:57.030 "uuid": "cff634e6-dbf6-4b8c-9b18-3cd58482f87b", 00:13:57.030 "is_configured": true, 00:13:57.030 "data_offset": 2048, 00:13:57.030 "data_size": 63488 00:13:57.030 }, 00:13:57.030 { 00:13:57.030 "name": "BaseBdev3", 00:13:57.030 "uuid": "d134e23c-713b-4adf-b5b5-547f953c7ee4", 00:13:57.030 "is_configured": true, 00:13:57.030 "data_offset": 2048, 00:13:57.030 "data_size": 63488 00:13:57.030 } 00:13:57.030 ] 00:13:57.030 }' 00:13:57.030 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.030 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.315 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:57.315 10:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:57.315 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:57.315 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:57.315 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:57.315 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:57.315 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:57.315 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:57.574 [2024-07-15 10:21:22.182112] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.574 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:57.574 "name": "Existed_Raid", 00:13:57.574 "aliases": [ 00:13:57.574 "f946b357-335f-4638-8fc5-03bc5d32a1b7" 00:13:57.574 ], 00:13:57.574 "product_name": "Raid Volume", 00:13:57.574 "block_size": 512, 00:13:57.574 "num_blocks": 63488, 00:13:57.574 "uuid": "f946b357-335f-4638-8fc5-03bc5d32a1b7", 00:13:57.574 "assigned_rate_limits": { 00:13:57.574 "rw_ios_per_sec": 0, 00:13:57.574 "rw_mbytes_per_sec": 0, 00:13:57.574 "r_mbytes_per_sec": 0, 00:13:57.574 "w_mbytes_per_sec": 0 00:13:57.574 }, 00:13:57.574 "claimed": false, 00:13:57.574 "zoned": false, 00:13:57.574 "supported_io_types": { 00:13:57.574 "read": true, 00:13:57.574 "write": true, 00:13:57.574 "unmap": false, 00:13:57.574 "flush": false, 00:13:57.574 "reset": true, 00:13:57.574 "nvme_admin": false, 00:13:57.574 "nvme_io": false, 00:13:57.574 "nvme_io_md": false, 00:13:57.574 "write_zeroes": true, 00:13:57.574 "zcopy": false, 00:13:57.574 "get_zone_info": false, 00:13:57.574 "zone_management": false, 00:13:57.574 "zone_append": false, 00:13:57.574 "compare": false, 00:13:57.574 "compare_and_write": false, 00:13:57.574 "abort": false, 00:13:57.574 "seek_hole": false, 00:13:57.574 "seek_data": false, 00:13:57.574 "copy": false, 00:13:57.574 "nvme_iov_md": false 00:13:57.574 }, 00:13:57.574 "memory_domains": [ 00:13:57.574 { 00:13:57.574 "dma_device_id": "system", 00:13:57.574 "dma_device_type": 1 00:13:57.574 }, 00:13:57.574 { 00:13:57.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.574 "dma_device_type": 2 00:13:57.574 }, 00:13:57.574 { 00:13:57.574 "dma_device_id": "system", 00:13:57.574 "dma_device_type": 1 00:13:57.574 }, 00:13:57.574 { 00:13:57.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.574 "dma_device_type": 2 00:13:57.574 }, 00:13:57.574 { 00:13:57.574 "dma_device_id": "system", 00:13:57.574 "dma_device_type": 1 00:13:57.574 }, 00:13:57.574 { 00:13:57.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.574 "dma_device_type": 2 00:13:57.574 } 00:13:57.574 ], 00:13:57.574 "driver_specific": { 00:13:57.574 "raid": { 00:13:57.574 "uuid": "f946b357-335f-4638-8fc5-03bc5d32a1b7", 00:13:57.574 "strip_size_kb": 0, 00:13:57.574 "state": "online", 00:13:57.574 "raid_level": "raid1", 00:13:57.574 "superblock": true, 00:13:57.574 "num_base_bdevs": 3, 00:13:57.574 "num_base_bdevs_discovered": 3, 00:13:57.574 "num_base_bdevs_operational": 3, 00:13:57.574 "base_bdevs_list": [ 00:13:57.574 { 00:13:57.574 "name": "BaseBdev1", 00:13:57.574 "uuid": 
"ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf", 00:13:57.574 "is_configured": true, 00:13:57.574 "data_offset": 2048, 00:13:57.574 "data_size": 63488 00:13:57.574 }, 00:13:57.574 { 00:13:57.574 "name": "BaseBdev2", 00:13:57.574 "uuid": "cff634e6-dbf6-4b8c-9b18-3cd58482f87b", 00:13:57.574 "is_configured": true, 00:13:57.574 "data_offset": 2048, 00:13:57.574 "data_size": 63488 00:13:57.574 }, 00:13:57.574 { 00:13:57.574 "name": "BaseBdev3", 00:13:57.574 "uuid": "d134e23c-713b-4adf-b5b5-547f953c7ee4", 00:13:57.574 "is_configured": true, 00:13:57.574 "data_offset": 2048, 00:13:57.574 "data_size": 63488 00:13:57.574 } 00:13:57.574 ] 00:13:57.574 } 00:13:57.574 } 00:13:57.574 }' 00:13:57.574 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:57.574 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:57.574 BaseBdev2 00:13:57.574 BaseBdev3' 00:13:57.574 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:57.574 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:57.574 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:57.832 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:57.832 "name": "BaseBdev1", 00:13:57.832 "aliases": [ 00:13:57.832 "ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf" 00:13:57.832 ], 00:13:57.832 "product_name": "Malloc disk", 00:13:57.832 "block_size": 512, 00:13:57.832 "num_blocks": 65536, 00:13:57.833 "uuid": "ecd6c3c8-2f62-4c7f-8e1e-eb8165f2e4bf", 00:13:57.833 "assigned_rate_limits": { 00:13:57.833 "rw_ios_per_sec": 0, 00:13:57.833 "rw_mbytes_per_sec": 0, 00:13:57.833 "r_mbytes_per_sec": 0, 00:13:57.833 "w_mbytes_per_sec": 0 00:13:57.833 }, 00:13:57.833 "claimed": true, 00:13:57.833 "claim_type": "exclusive_write", 00:13:57.833 "zoned": false, 00:13:57.833 "supported_io_types": { 00:13:57.833 "read": true, 00:13:57.833 "write": true, 00:13:57.833 "unmap": true, 00:13:57.833 "flush": true, 00:13:57.833 "reset": true, 00:13:57.833 "nvme_admin": false, 00:13:57.833 "nvme_io": false, 00:13:57.833 "nvme_io_md": false, 00:13:57.833 "write_zeroes": true, 00:13:57.833 "zcopy": true, 00:13:57.833 "get_zone_info": false, 00:13:57.833 "zone_management": false, 00:13:57.833 "zone_append": false, 00:13:57.833 "compare": false, 00:13:57.833 "compare_and_write": false, 00:13:57.833 "abort": true, 00:13:57.833 "seek_hole": false, 00:13:57.833 "seek_data": false, 00:13:57.833 "copy": true, 00:13:57.833 "nvme_iov_md": false 00:13:57.833 }, 00:13:57.833 "memory_domains": [ 00:13:57.833 { 00:13:57.833 "dma_device_id": "system", 00:13:57.833 "dma_device_type": 1 00:13:57.833 }, 00:13:57.833 { 00:13:57.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.833 "dma_device_type": 2 00:13:57.833 } 00:13:57.833 ], 00:13:57.833 "driver_specific": {} 00:13:57.833 }' 00:13:57.833 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.833 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:57.833 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:57.833 10:21:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.833 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:57.833 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:57.833 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.090 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.090 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:58.090 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.090 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.090 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:58.090 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:58.090 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:58.090 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:58.348 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:58.348 "name": "BaseBdev2", 00:13:58.348 "aliases": [ 00:13:58.348 "cff634e6-dbf6-4b8c-9b18-3cd58482f87b" 00:13:58.348 ], 00:13:58.348 "product_name": "Malloc disk", 00:13:58.348 "block_size": 512, 00:13:58.348 "num_blocks": 65536, 00:13:58.348 "uuid": "cff634e6-dbf6-4b8c-9b18-3cd58482f87b", 00:13:58.348 "assigned_rate_limits": { 00:13:58.348 "rw_ios_per_sec": 0, 00:13:58.348 "rw_mbytes_per_sec": 0, 00:13:58.348 "r_mbytes_per_sec": 0, 00:13:58.348 "w_mbytes_per_sec": 0 00:13:58.348 }, 00:13:58.348 "claimed": true, 00:13:58.348 "claim_type": "exclusive_write", 00:13:58.348 "zoned": false, 00:13:58.348 "supported_io_types": { 00:13:58.348 "read": true, 00:13:58.348 "write": true, 00:13:58.348 "unmap": true, 00:13:58.348 "flush": true, 00:13:58.348 "reset": true, 00:13:58.348 "nvme_admin": false, 00:13:58.348 "nvme_io": false, 00:13:58.348 "nvme_io_md": false, 00:13:58.348 "write_zeroes": true, 00:13:58.348 "zcopy": true, 00:13:58.348 "get_zone_info": false, 00:13:58.348 "zone_management": false, 00:13:58.348 "zone_append": false, 00:13:58.348 "compare": false, 00:13:58.348 "compare_and_write": false, 00:13:58.348 "abort": true, 00:13:58.348 "seek_hole": false, 00:13:58.348 "seek_data": false, 00:13:58.348 "copy": true, 00:13:58.348 "nvme_iov_md": false 00:13:58.348 }, 00:13:58.348 "memory_domains": [ 00:13:58.348 { 00:13:58.348 "dma_device_id": "system", 00:13:58.348 "dma_device_type": 1 00:13:58.348 }, 00:13:58.348 { 00:13:58.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.348 "dma_device_type": 2 00:13:58.348 } 00:13:58.348 ], 00:13:58.348 "driver_specific": {} 00:13:58.348 }' 00:13:58.348 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:58.348 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:58.348 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:58.348 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:58.348 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:13:58.348 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:58.348 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.348 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.348 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:58.348 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.620 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.620 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:58.620 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:58.620 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:58.620 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:58.620 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:58.620 "name": "BaseBdev3", 00:13:58.620 "aliases": [ 00:13:58.620 "d134e23c-713b-4adf-b5b5-547f953c7ee4" 00:13:58.620 ], 00:13:58.620 "product_name": "Malloc disk", 00:13:58.620 "block_size": 512, 00:13:58.620 "num_blocks": 65536, 00:13:58.620 "uuid": "d134e23c-713b-4adf-b5b5-547f953c7ee4", 00:13:58.620 "assigned_rate_limits": { 00:13:58.620 "rw_ios_per_sec": 0, 00:13:58.620 "rw_mbytes_per_sec": 0, 00:13:58.620 "r_mbytes_per_sec": 0, 00:13:58.620 "w_mbytes_per_sec": 0 00:13:58.620 }, 00:13:58.620 "claimed": true, 00:13:58.620 "claim_type": "exclusive_write", 00:13:58.620 "zoned": false, 00:13:58.620 "supported_io_types": { 00:13:58.620 "read": true, 00:13:58.620 "write": true, 00:13:58.620 "unmap": true, 00:13:58.620 "flush": true, 00:13:58.620 "reset": true, 00:13:58.620 "nvme_admin": false, 00:13:58.620 "nvme_io": false, 00:13:58.620 "nvme_io_md": false, 00:13:58.620 "write_zeroes": true, 00:13:58.620 "zcopy": true, 00:13:58.620 "get_zone_info": false, 00:13:58.620 "zone_management": false, 00:13:58.620 "zone_append": false, 00:13:58.620 "compare": false, 00:13:58.620 "compare_and_write": false, 00:13:58.620 "abort": true, 00:13:58.620 "seek_hole": false, 00:13:58.620 "seek_data": false, 00:13:58.620 "copy": true, 00:13:58.620 "nvme_iov_md": false 00:13:58.620 }, 00:13:58.620 "memory_domains": [ 00:13:58.620 { 00:13:58.620 "dma_device_id": "system", 00:13:58.620 "dma_device_type": 1 00:13:58.620 }, 00:13:58.620 { 00:13:58.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.620 "dma_device_type": 2 00:13:58.620 } 00:13:58.620 ], 00:13:58.620 "driver_specific": {} 00:13:58.620 }' 00:13:58.620 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:58.884 
10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:58.884 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:59.179 [2024-07-15 10:21:23.822183] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:59.179 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:59.180 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:59.180 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:59.180 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.180 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.180 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.180 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.180 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.180 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.438 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.438 "name": "Existed_Raid", 00:13:59.438 "uuid": "f946b357-335f-4638-8fc5-03bc5d32a1b7", 00:13:59.438 "strip_size_kb": 0, 00:13:59.438 "state": "online", 00:13:59.438 "raid_level": "raid1", 00:13:59.438 "superblock": true, 00:13:59.438 "num_base_bdevs": 3, 00:13:59.438 "num_base_bdevs_discovered": 2, 00:13:59.438 "num_base_bdevs_operational": 2, 00:13:59.438 "base_bdevs_list": [ 00:13:59.438 { 00:13:59.438 "name": null, 00:13:59.438 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:59.438 "is_configured": false, 00:13:59.438 "data_offset": 2048, 00:13:59.438 "data_size": 63488 00:13:59.438 }, 00:13:59.438 { 00:13:59.438 "name": "BaseBdev2", 00:13:59.438 "uuid": "cff634e6-dbf6-4b8c-9b18-3cd58482f87b", 00:13:59.438 "is_configured": true, 00:13:59.438 "data_offset": 2048, 00:13:59.438 "data_size": 63488 00:13:59.438 }, 00:13:59.438 { 00:13:59.438 "name": "BaseBdev3", 00:13:59.438 "uuid": "d134e23c-713b-4adf-b5b5-547f953c7ee4", 00:13:59.438 "is_configured": true, 00:13:59.438 "data_offset": 2048, 00:13:59.438 "data_size": 63488 00:13:59.438 } 00:13:59.438 ] 00:13:59.438 }' 00:13:59.438 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.438 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.003 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:00.003 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:00.003 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:00.003 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.003 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:00.003 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:00.003 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:00.261 [2024-07-15 10:21:24.841708] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:00.261 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:00.261 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:00.261 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.261 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:00.261 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:00.261 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:00.261 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:00.519 [2024-07-15 10:21:25.192127] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:00.519 [2024-07-15 10:21:25.192185] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.519 [2024-07-15 10:21:25.202064] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.519 [2024-07-15 10:21:25.202088] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.519 [2024-07-15 10:21:25.202096] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x24ff700 
name Existed_Raid, state offline 00:14:00.519 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:00.519 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:00.519 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.519 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:00.778 BaseBdev2 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:00.778 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.036 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:01.296 [ 00:14:01.296 { 00:14:01.296 "name": "BaseBdev2", 00:14:01.296 "aliases": [ 00:14:01.296 "f535c9e9-16d3-47ef-ae42-b95abda817ee" 00:14:01.296 ], 00:14:01.296 "product_name": "Malloc disk", 00:14:01.296 "block_size": 512, 00:14:01.296 "num_blocks": 65536, 00:14:01.296 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:01.296 "assigned_rate_limits": { 00:14:01.296 "rw_ios_per_sec": 0, 00:14:01.296 "rw_mbytes_per_sec": 0, 00:14:01.296 "r_mbytes_per_sec": 0, 00:14:01.296 "w_mbytes_per_sec": 0 00:14:01.296 }, 00:14:01.296 "claimed": false, 00:14:01.296 "zoned": false, 00:14:01.296 "supported_io_types": { 00:14:01.296 "read": true, 00:14:01.296 "write": true, 00:14:01.296 "unmap": true, 00:14:01.296 "flush": true, 00:14:01.296 "reset": true, 00:14:01.296 "nvme_admin": false, 00:14:01.296 "nvme_io": false, 00:14:01.296 "nvme_io_md": false, 00:14:01.296 "write_zeroes": true, 00:14:01.296 "zcopy": true, 00:14:01.296 "get_zone_info": false, 00:14:01.296 "zone_management": false, 00:14:01.296 "zone_append": false, 00:14:01.296 "compare": false, 00:14:01.296 
"compare_and_write": false, 00:14:01.296 "abort": true, 00:14:01.296 "seek_hole": false, 00:14:01.296 "seek_data": false, 00:14:01.296 "copy": true, 00:14:01.296 "nvme_iov_md": false 00:14:01.296 }, 00:14:01.296 "memory_domains": [ 00:14:01.296 { 00:14:01.296 "dma_device_id": "system", 00:14:01.296 "dma_device_type": 1 00:14:01.296 }, 00:14:01.296 { 00:14:01.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.296 "dma_device_type": 2 00:14:01.296 } 00:14:01.296 ], 00:14:01.296 "driver_specific": {} 00:14:01.296 } 00:14:01.296 ] 00:14:01.296 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:01.296 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:01.296 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:01.296 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:01.296 BaseBdev3 00:14:01.296 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:01.296 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:01.296 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:01.296 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:01.296 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:01.296 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:01.296 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.554 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:01.813 [ 00:14:01.813 { 00:14:01.813 "name": "BaseBdev3", 00:14:01.813 "aliases": [ 00:14:01.813 "483d4799-56ef-4499-9c72-f96a4360c533" 00:14:01.813 ], 00:14:01.813 "product_name": "Malloc disk", 00:14:01.813 "block_size": 512, 00:14:01.813 "num_blocks": 65536, 00:14:01.813 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:01.813 "assigned_rate_limits": { 00:14:01.813 "rw_ios_per_sec": 0, 00:14:01.813 "rw_mbytes_per_sec": 0, 00:14:01.813 "r_mbytes_per_sec": 0, 00:14:01.813 "w_mbytes_per_sec": 0 00:14:01.813 }, 00:14:01.813 "claimed": false, 00:14:01.813 "zoned": false, 00:14:01.813 "supported_io_types": { 00:14:01.813 "read": true, 00:14:01.813 "write": true, 00:14:01.813 "unmap": true, 00:14:01.813 "flush": true, 00:14:01.813 "reset": true, 00:14:01.813 "nvme_admin": false, 00:14:01.813 "nvme_io": false, 00:14:01.813 "nvme_io_md": false, 00:14:01.813 "write_zeroes": true, 00:14:01.813 "zcopy": true, 00:14:01.813 "get_zone_info": false, 00:14:01.813 "zone_management": false, 00:14:01.813 "zone_append": false, 00:14:01.813 "compare": false, 00:14:01.813 "compare_and_write": false, 00:14:01.813 "abort": true, 00:14:01.813 "seek_hole": false, 00:14:01.813 "seek_data": false, 00:14:01.813 "copy": true, 00:14:01.813 "nvme_iov_md": false 00:14:01.813 }, 00:14:01.813 "memory_domains": [ 00:14:01.813 { 
00:14:01.813 "dma_device_id": "system", 00:14:01.813 "dma_device_type": 1 00:14:01.813 }, 00:14:01.813 { 00:14:01.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.813 "dma_device_type": 2 00:14:01.813 } 00:14:01.813 ], 00:14:01.813 "driver_specific": {} 00:14:01.813 } 00:14:01.813 ] 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:01.813 [2024-07-15 10:21:26.541016] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.813 [2024-07-15 10:21:26.541046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.813 [2024-07-15 10:21:26.541062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.813 [2024-07-15 10:21:26.542010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.813 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.072 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.072 "name": "Existed_Raid", 00:14:02.072 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:02.072 "strip_size_kb": 0, 00:14:02.072 "state": "configuring", 00:14:02.072 "raid_level": "raid1", 00:14:02.072 "superblock": true, 00:14:02.072 "num_base_bdevs": 3, 00:14:02.072 "num_base_bdevs_discovered": 2, 00:14:02.072 "num_base_bdevs_operational": 3, 00:14:02.072 "base_bdevs_list": [ 00:14:02.072 { 00:14:02.072 "name": "BaseBdev1", 00:14:02.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.072 "is_configured": 
false, 00:14:02.072 "data_offset": 0, 00:14:02.072 "data_size": 0 00:14:02.072 }, 00:14:02.072 { 00:14:02.072 "name": "BaseBdev2", 00:14:02.072 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:02.072 "is_configured": true, 00:14:02.072 "data_offset": 2048, 00:14:02.072 "data_size": 63488 00:14:02.072 }, 00:14:02.072 { 00:14:02.072 "name": "BaseBdev3", 00:14:02.072 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:02.072 "is_configured": true, 00:14:02.072 "data_offset": 2048, 00:14:02.072 "data_size": 63488 00:14:02.072 } 00:14:02.072 ] 00:14:02.072 }' 00:14:02.072 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.072 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:02.639 [2024-07-15 10:21:27.359116] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.639 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.899 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:02.899 "name": "Existed_Raid", 00:14:02.899 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:02.899 "strip_size_kb": 0, 00:14:02.899 "state": "configuring", 00:14:02.899 "raid_level": "raid1", 00:14:02.899 "superblock": true, 00:14:02.899 "num_base_bdevs": 3, 00:14:02.899 "num_base_bdevs_discovered": 1, 00:14:02.899 "num_base_bdevs_operational": 3, 00:14:02.899 "base_bdevs_list": [ 00:14:02.899 { 00:14:02.899 "name": "BaseBdev1", 00:14:02.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.899 "is_configured": false, 00:14:02.899 "data_offset": 0, 00:14:02.899 "data_size": 0 00:14:02.899 }, 00:14:02.899 { 00:14:02.899 "name": null, 00:14:02.899 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:02.899 "is_configured": false, 00:14:02.899 "data_offset": 2048, 00:14:02.899 "data_size": 
63488 00:14:02.899 }, 00:14:02.899 { 00:14:02.899 "name": "BaseBdev3", 00:14:02.899 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:02.899 "is_configured": true, 00:14:02.899 "data_offset": 2048, 00:14:02.899 "data_size": 63488 00:14:02.899 } 00:14:02.899 ] 00:14:02.899 }' 00:14:02.899 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:02.899 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.467 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.467 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:03.467 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:03.467 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:03.725 [2024-07-15 10:21:28.356463] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.725 BaseBdev1 00:14:03.725 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:03.725 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:03.725 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:03.725 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:03.725 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:03.725 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:03.725 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.984 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:03.984 [ 00:14:03.984 { 00:14:03.984 "name": "BaseBdev1", 00:14:03.984 "aliases": [ 00:14:03.984 "edcf8e28-bfe3-4518-a6c4-5926b412c78e" 00:14:03.984 ], 00:14:03.984 "product_name": "Malloc disk", 00:14:03.984 "block_size": 512, 00:14:03.984 "num_blocks": 65536, 00:14:03.984 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:03.984 "assigned_rate_limits": { 00:14:03.984 "rw_ios_per_sec": 0, 00:14:03.984 "rw_mbytes_per_sec": 0, 00:14:03.984 "r_mbytes_per_sec": 0, 00:14:03.984 "w_mbytes_per_sec": 0 00:14:03.984 }, 00:14:03.984 "claimed": true, 00:14:03.984 "claim_type": "exclusive_write", 00:14:03.984 "zoned": false, 00:14:03.984 "supported_io_types": { 00:14:03.984 "read": true, 00:14:03.984 "write": true, 00:14:03.984 "unmap": true, 00:14:03.984 "flush": true, 00:14:03.984 "reset": true, 00:14:03.984 "nvme_admin": false, 00:14:03.984 "nvme_io": false, 00:14:03.984 "nvme_io_md": false, 00:14:03.984 "write_zeroes": true, 00:14:03.984 "zcopy": true, 00:14:03.984 "get_zone_info": false, 00:14:03.984 "zone_management": false, 00:14:03.984 "zone_append": false, 00:14:03.984 "compare": false, 00:14:03.984 
"compare_and_write": false, 00:14:03.984 "abort": true, 00:14:03.984 "seek_hole": false, 00:14:03.985 "seek_data": false, 00:14:03.985 "copy": true, 00:14:03.985 "nvme_iov_md": false 00:14:03.985 }, 00:14:03.985 "memory_domains": [ 00:14:03.985 { 00:14:03.985 "dma_device_id": "system", 00:14:03.985 "dma_device_type": 1 00:14:03.985 }, 00:14:03.985 { 00:14:03.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.985 "dma_device_type": 2 00:14:03.985 } 00:14:03.985 ], 00:14:03.985 "driver_specific": {} 00:14:03.985 } 00:14:03.985 ] 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.985 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.244 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.244 "name": "Existed_Raid", 00:14:04.244 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:04.244 "strip_size_kb": 0, 00:14:04.244 "state": "configuring", 00:14:04.244 "raid_level": "raid1", 00:14:04.244 "superblock": true, 00:14:04.244 "num_base_bdevs": 3, 00:14:04.244 "num_base_bdevs_discovered": 2, 00:14:04.244 "num_base_bdevs_operational": 3, 00:14:04.244 "base_bdevs_list": [ 00:14:04.244 { 00:14:04.244 "name": "BaseBdev1", 00:14:04.244 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:04.244 "is_configured": true, 00:14:04.244 "data_offset": 2048, 00:14:04.244 "data_size": 63488 00:14:04.244 }, 00:14:04.244 { 00:14:04.244 "name": null, 00:14:04.244 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:04.244 "is_configured": false, 00:14:04.244 "data_offset": 2048, 00:14:04.244 "data_size": 63488 00:14:04.244 }, 00:14:04.244 { 00:14:04.244 "name": "BaseBdev3", 00:14:04.244 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:04.244 "is_configured": true, 00:14:04.244 "data_offset": 2048, 00:14:04.244 "data_size": 63488 00:14:04.244 } 00:14:04.244 ] 00:14:04.244 }' 00:14:04.244 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.244 10:21:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.809 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:04.809 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.809 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:04.809 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:05.066 [2024-07-15 10:21:29.656009] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.066 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.324 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:05.324 "name": "Existed_Raid", 00:14:05.324 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:05.324 "strip_size_kb": 0, 00:14:05.324 "state": "configuring", 00:14:05.324 "raid_level": "raid1", 00:14:05.324 "superblock": true, 00:14:05.324 "num_base_bdevs": 3, 00:14:05.324 "num_base_bdevs_discovered": 1, 00:14:05.324 "num_base_bdevs_operational": 3, 00:14:05.324 "base_bdevs_list": [ 00:14:05.324 { 00:14:05.324 "name": "BaseBdev1", 00:14:05.324 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:05.324 "is_configured": true, 00:14:05.324 "data_offset": 2048, 00:14:05.324 "data_size": 63488 00:14:05.324 }, 00:14:05.324 { 00:14:05.324 "name": null, 00:14:05.324 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:05.324 "is_configured": false, 00:14:05.324 "data_offset": 2048, 00:14:05.324 "data_size": 63488 00:14:05.324 }, 00:14:05.324 { 00:14:05.324 "name": null, 00:14:05.324 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:05.324 "is_configured": false, 00:14:05.324 "data_offset": 2048, 00:14:05.324 "data_size": 63488 00:14:05.324 } 00:14:05.324 ] 00:14:05.324 }' 
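The run above repeats one verification pattern after every base-bdev add or remove: dump the array with bdev_raid_get_bdevs, pick out the Existed_Raid entry with jq, and compare its state and bdev counts against what the step should have produced. The short shell sketch below re-creates that check outside the harness; the RPC helper path, socket, and expected values are assumptions copied from this log rather than fixed parts of the test suite.

  # Minimal sketch of the state check traced above (assumed paths and expected values).
  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(echo "$info" | jq -r '.state')
  discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
  # After removing BaseBdev3 the array should still be assembling with one configured member.
  [ "$state" = "configuring" ] || echo "unexpected state: $state"
  [ "$discovered" -eq 1 ] || echo "unexpected discovered count: $discovered"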
00:14:05.324 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:05.324 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.582 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.582 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:05.840 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:05.840 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:06.098 [2024-07-15 10:21:30.686668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.098 "name": "Existed_Raid", 00:14:06.098 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:06.098 "strip_size_kb": 0, 00:14:06.098 "state": "configuring", 00:14:06.098 "raid_level": "raid1", 00:14:06.098 "superblock": true, 00:14:06.098 "num_base_bdevs": 3, 00:14:06.098 "num_base_bdevs_discovered": 2, 00:14:06.098 "num_base_bdevs_operational": 3, 00:14:06.098 "base_bdevs_list": [ 00:14:06.098 { 00:14:06.098 "name": "BaseBdev1", 00:14:06.098 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:06.098 "is_configured": true, 00:14:06.098 "data_offset": 2048, 00:14:06.098 "data_size": 63488 00:14:06.098 }, 00:14:06.098 { 00:14:06.098 "name": null, 00:14:06.098 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:06.098 "is_configured": false, 00:14:06.098 "data_offset": 2048, 00:14:06.098 "data_size": 63488 00:14:06.098 }, 00:14:06.098 { 00:14:06.098 "name": "BaseBdev3", 
00:14:06.098 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:06.098 "is_configured": true, 00:14:06.098 "data_offset": 2048, 00:14:06.098 "data_size": 63488 00:14:06.098 } 00:14:06.098 ] 00:14:06.098 }' 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.098 10:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.664 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.664 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:06.941 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:06.941 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:06.941 [2024-07-15 10:21:31.697291] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.254 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.255 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.255 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.255 "name": "Existed_Raid", 00:14:07.255 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:07.255 "strip_size_kb": 0, 00:14:07.255 "state": "configuring", 00:14:07.255 "raid_level": "raid1", 00:14:07.255 "superblock": true, 00:14:07.255 "num_base_bdevs": 3, 00:14:07.255 "num_base_bdevs_discovered": 1, 00:14:07.255 "num_base_bdevs_operational": 3, 00:14:07.255 "base_bdevs_list": [ 00:14:07.255 { 00:14:07.255 "name": null, 00:14:07.255 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:07.255 "is_configured": false, 00:14:07.255 "data_offset": 2048, 00:14:07.255 "data_size": 63488 00:14:07.255 }, 00:14:07.255 { 00:14:07.255 "name": null, 00:14:07.255 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:07.255 
"is_configured": false, 00:14:07.255 "data_offset": 2048, 00:14:07.255 "data_size": 63488 00:14:07.255 }, 00:14:07.255 { 00:14:07.255 "name": "BaseBdev3", 00:14:07.255 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:07.255 "is_configured": true, 00:14:07.255 "data_offset": 2048, 00:14:07.255 "data_size": 63488 00:14:07.255 } 00:14:07.255 ] 00:14:07.255 }' 00:14:07.255 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.255 10:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.819 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.819 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:07.819 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:07.819 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:08.077 [2024-07-15 10:21:32.725716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.077 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.335 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.335 "name": "Existed_Raid", 00:14:08.335 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:08.335 "strip_size_kb": 0, 00:14:08.335 "state": "configuring", 00:14:08.335 "raid_level": "raid1", 00:14:08.336 "superblock": true, 00:14:08.336 "num_base_bdevs": 3, 00:14:08.336 "num_base_bdevs_discovered": 2, 00:14:08.336 "num_base_bdevs_operational": 3, 00:14:08.336 "base_bdevs_list": [ 00:14:08.336 { 00:14:08.336 "name": null, 00:14:08.336 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:08.336 "is_configured": false, 
00:14:08.336 "data_offset": 2048, 00:14:08.336 "data_size": 63488 00:14:08.336 }, 00:14:08.336 { 00:14:08.336 "name": "BaseBdev2", 00:14:08.336 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:08.336 "is_configured": true, 00:14:08.336 "data_offset": 2048, 00:14:08.336 "data_size": 63488 00:14:08.336 }, 00:14:08.336 { 00:14:08.336 "name": "BaseBdev3", 00:14:08.336 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:08.336 "is_configured": true, 00:14:08.336 "data_offset": 2048, 00:14:08.336 "data_size": 63488 00:14:08.336 } 00:14:08.336 ] 00:14:08.336 }' 00:14:08.336 10:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.336 10:21:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.902 10:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.902 10:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.902 10:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:08.902 10:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.902 10:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:09.160 10:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u edcf8e28-bfe3-4518-a6c4-5926b412c78e 00:14:09.160 [2024-07-15 10:21:33.903521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:09.160 [2024-07-15 10:21:33.903623] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26ad080 00:14:09.160 [2024-07-15 10:21:33.903648] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:09.160 [2024-07-15 10:21:33.903765] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26a3760 00:14:09.160 [2024-07-15 10:21:33.903846] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26ad080 00:14:09.160 [2024-07-15 10:21:33.903853] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x26ad080 00:14:09.160 [2024-07-15 10:21:33.903929] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.160 NewBaseBdev 00:14:09.160 10:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:09.160 10:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:09.160 10:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:09.160 10:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:09.160 10:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:09.160 10:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:09.160 10:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.419 10:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:09.676 [ 00:14:09.676 { 00:14:09.676 "name": "NewBaseBdev", 00:14:09.676 "aliases": [ 00:14:09.676 "edcf8e28-bfe3-4518-a6c4-5926b412c78e" 00:14:09.676 ], 00:14:09.676 "product_name": "Malloc disk", 00:14:09.676 "block_size": 512, 00:14:09.676 "num_blocks": 65536, 00:14:09.676 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:09.676 "assigned_rate_limits": { 00:14:09.676 "rw_ios_per_sec": 0, 00:14:09.676 "rw_mbytes_per_sec": 0, 00:14:09.676 "r_mbytes_per_sec": 0, 00:14:09.676 "w_mbytes_per_sec": 0 00:14:09.676 }, 00:14:09.676 "claimed": true, 00:14:09.676 "claim_type": "exclusive_write", 00:14:09.676 "zoned": false, 00:14:09.676 "supported_io_types": { 00:14:09.676 "read": true, 00:14:09.676 "write": true, 00:14:09.676 "unmap": true, 00:14:09.676 "flush": true, 00:14:09.676 "reset": true, 00:14:09.676 "nvme_admin": false, 00:14:09.676 "nvme_io": false, 00:14:09.676 "nvme_io_md": false, 00:14:09.676 "write_zeroes": true, 00:14:09.676 "zcopy": true, 00:14:09.676 "get_zone_info": false, 00:14:09.676 "zone_management": false, 00:14:09.676 "zone_append": false, 00:14:09.677 "compare": false, 00:14:09.677 "compare_and_write": false, 00:14:09.677 "abort": true, 00:14:09.677 "seek_hole": false, 00:14:09.677 "seek_data": false, 00:14:09.677 "copy": true, 00:14:09.677 "nvme_iov_md": false 00:14:09.677 }, 00:14:09.677 "memory_domains": [ 00:14:09.677 { 00:14:09.677 "dma_device_id": "system", 00:14:09.677 "dma_device_type": 1 00:14:09.677 }, 00:14:09.677 { 00:14:09.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.677 "dma_device_type": 2 00:14:09.677 } 00:14:09.677 ], 00:14:09.677 "driver_specific": {} 00:14:09.677 } 00:14:09.677 ] 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.677 "name": "Existed_Raid", 00:14:09.677 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:09.677 "strip_size_kb": 0, 00:14:09.677 "state": "online", 00:14:09.677 "raid_level": "raid1", 00:14:09.677 "superblock": true, 00:14:09.677 "num_base_bdevs": 3, 00:14:09.677 "num_base_bdevs_discovered": 3, 00:14:09.677 "num_base_bdevs_operational": 3, 00:14:09.677 "base_bdevs_list": [ 00:14:09.677 { 00:14:09.677 "name": "NewBaseBdev", 00:14:09.677 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:09.677 "is_configured": true, 00:14:09.677 "data_offset": 2048, 00:14:09.677 "data_size": 63488 00:14:09.677 }, 00:14:09.677 { 00:14:09.677 "name": "BaseBdev2", 00:14:09.677 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:09.677 "is_configured": true, 00:14:09.677 "data_offset": 2048, 00:14:09.677 "data_size": 63488 00:14:09.677 }, 00:14:09.677 { 00:14:09.677 "name": "BaseBdev3", 00:14:09.677 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:09.677 "is_configured": true, 00:14:09.677 "data_offset": 2048, 00:14:09.677 "data_size": 63488 00:14:09.677 } 00:14:09.677 ] 00:14:09.677 }' 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.677 10:21:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.241 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:10.241 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:10.241 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:10.241 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:10.241 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:10.241 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:10.241 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:10.241 10:21:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:10.241 [2024-07-15 10:21:35.018571] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.499 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:10.499 "name": "Existed_Raid", 00:14:10.499 "aliases": [ 00:14:10.499 "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8" 00:14:10.499 ], 00:14:10.499 "product_name": "Raid Volume", 00:14:10.499 "block_size": 512, 00:14:10.499 "num_blocks": 63488, 00:14:10.499 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:10.499 "assigned_rate_limits": { 00:14:10.499 "rw_ios_per_sec": 0, 00:14:10.499 "rw_mbytes_per_sec": 0, 00:14:10.499 "r_mbytes_per_sec": 0, 00:14:10.499 "w_mbytes_per_sec": 0 00:14:10.499 }, 00:14:10.499 "claimed": false, 00:14:10.499 "zoned": false, 00:14:10.499 "supported_io_types": { 00:14:10.499 "read": true, 00:14:10.499 "write": true, 00:14:10.499 "unmap": false, 00:14:10.499 "flush": false, 00:14:10.499 "reset": true, 00:14:10.499 "nvme_admin": false, 00:14:10.499 "nvme_io": false, 00:14:10.499 "nvme_io_md": 
false, 00:14:10.499 "write_zeroes": true, 00:14:10.499 "zcopy": false, 00:14:10.499 "get_zone_info": false, 00:14:10.499 "zone_management": false, 00:14:10.499 "zone_append": false, 00:14:10.499 "compare": false, 00:14:10.499 "compare_and_write": false, 00:14:10.499 "abort": false, 00:14:10.499 "seek_hole": false, 00:14:10.499 "seek_data": false, 00:14:10.499 "copy": false, 00:14:10.499 "nvme_iov_md": false 00:14:10.499 }, 00:14:10.499 "memory_domains": [ 00:14:10.499 { 00:14:10.499 "dma_device_id": "system", 00:14:10.499 "dma_device_type": 1 00:14:10.499 }, 00:14:10.499 { 00:14:10.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.499 "dma_device_type": 2 00:14:10.499 }, 00:14:10.499 { 00:14:10.499 "dma_device_id": "system", 00:14:10.499 "dma_device_type": 1 00:14:10.499 }, 00:14:10.499 { 00:14:10.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.499 "dma_device_type": 2 00:14:10.499 }, 00:14:10.499 { 00:14:10.499 "dma_device_id": "system", 00:14:10.499 "dma_device_type": 1 00:14:10.499 }, 00:14:10.499 { 00:14:10.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.499 "dma_device_type": 2 00:14:10.499 } 00:14:10.499 ], 00:14:10.499 "driver_specific": { 00:14:10.499 "raid": { 00:14:10.499 "uuid": "1685a1e9-0662-49e8-8c4f-9b5bfcd4fab8", 00:14:10.499 "strip_size_kb": 0, 00:14:10.499 "state": "online", 00:14:10.499 "raid_level": "raid1", 00:14:10.499 "superblock": true, 00:14:10.499 "num_base_bdevs": 3, 00:14:10.499 "num_base_bdevs_discovered": 3, 00:14:10.499 "num_base_bdevs_operational": 3, 00:14:10.499 "base_bdevs_list": [ 00:14:10.499 { 00:14:10.499 "name": "NewBaseBdev", 00:14:10.499 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:10.499 "is_configured": true, 00:14:10.499 "data_offset": 2048, 00:14:10.499 "data_size": 63488 00:14:10.499 }, 00:14:10.499 { 00:14:10.499 "name": "BaseBdev2", 00:14:10.499 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:10.499 "is_configured": true, 00:14:10.499 "data_offset": 2048, 00:14:10.499 "data_size": 63488 00:14:10.499 }, 00:14:10.499 { 00:14:10.499 "name": "BaseBdev3", 00:14:10.499 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:10.499 "is_configured": true, 00:14:10.499 "data_offset": 2048, 00:14:10.499 "data_size": 63488 00:14:10.499 } 00:14:10.499 ] 00:14:10.499 } 00:14:10.499 } 00:14:10.499 }' 00:14:10.499 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:10.499 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:10.499 BaseBdev2 00:14:10.499 BaseBdev3' 00:14:10.499 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:10.499 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:10.499 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.499 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.499 "name": "NewBaseBdev", 00:14:10.499 "aliases": [ 00:14:10.499 "edcf8e28-bfe3-4518-a6c4-5926b412c78e" 00:14:10.499 ], 00:14:10.499 "product_name": "Malloc disk", 00:14:10.499 "block_size": 512, 00:14:10.499 "num_blocks": 65536, 00:14:10.499 "uuid": "edcf8e28-bfe3-4518-a6c4-5926b412c78e", 00:14:10.499 "assigned_rate_limits": { 00:14:10.499 
"rw_ios_per_sec": 0, 00:14:10.499 "rw_mbytes_per_sec": 0, 00:14:10.499 "r_mbytes_per_sec": 0, 00:14:10.499 "w_mbytes_per_sec": 0 00:14:10.499 }, 00:14:10.499 "claimed": true, 00:14:10.499 "claim_type": "exclusive_write", 00:14:10.499 "zoned": false, 00:14:10.499 "supported_io_types": { 00:14:10.499 "read": true, 00:14:10.499 "write": true, 00:14:10.499 "unmap": true, 00:14:10.499 "flush": true, 00:14:10.499 "reset": true, 00:14:10.499 "nvme_admin": false, 00:14:10.499 "nvme_io": false, 00:14:10.499 "nvme_io_md": false, 00:14:10.499 "write_zeroes": true, 00:14:10.499 "zcopy": true, 00:14:10.499 "get_zone_info": false, 00:14:10.499 "zone_management": false, 00:14:10.499 "zone_append": false, 00:14:10.499 "compare": false, 00:14:10.499 "compare_and_write": false, 00:14:10.499 "abort": true, 00:14:10.499 "seek_hole": false, 00:14:10.499 "seek_data": false, 00:14:10.499 "copy": true, 00:14:10.499 "nvme_iov_md": false 00:14:10.499 }, 00:14:10.499 "memory_domains": [ 00:14:10.499 { 00:14:10.499 "dma_device_id": "system", 00:14:10.500 "dma_device_type": 1 00:14:10.500 }, 00:14:10.500 { 00:14:10.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.500 "dma_device_type": 2 00:14:10.500 } 00:14:10.500 ], 00:14:10.500 "driver_specific": {} 00:14:10.500 }' 00:14:10.500 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.500 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:10.757 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.014 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.014 "name": "BaseBdev2", 00:14:11.014 "aliases": [ 00:14:11.014 "f535c9e9-16d3-47ef-ae42-b95abda817ee" 00:14:11.014 ], 00:14:11.014 "product_name": "Malloc disk", 00:14:11.014 "block_size": 512, 00:14:11.014 "num_blocks": 65536, 00:14:11.014 "uuid": "f535c9e9-16d3-47ef-ae42-b95abda817ee", 00:14:11.014 "assigned_rate_limits": { 00:14:11.014 "rw_ios_per_sec": 0, 00:14:11.014 "rw_mbytes_per_sec": 0, 00:14:11.014 "r_mbytes_per_sec": 0, 00:14:11.014 "w_mbytes_per_sec": 0 
00:14:11.014 }, 00:14:11.014 "claimed": true, 00:14:11.015 "claim_type": "exclusive_write", 00:14:11.015 "zoned": false, 00:14:11.015 "supported_io_types": { 00:14:11.015 "read": true, 00:14:11.015 "write": true, 00:14:11.015 "unmap": true, 00:14:11.015 "flush": true, 00:14:11.015 "reset": true, 00:14:11.015 "nvme_admin": false, 00:14:11.015 "nvme_io": false, 00:14:11.015 "nvme_io_md": false, 00:14:11.015 "write_zeroes": true, 00:14:11.015 "zcopy": true, 00:14:11.015 "get_zone_info": false, 00:14:11.015 "zone_management": false, 00:14:11.015 "zone_append": false, 00:14:11.015 "compare": false, 00:14:11.015 "compare_and_write": false, 00:14:11.015 "abort": true, 00:14:11.015 "seek_hole": false, 00:14:11.015 "seek_data": false, 00:14:11.015 "copy": true, 00:14:11.015 "nvme_iov_md": false 00:14:11.015 }, 00:14:11.015 "memory_domains": [ 00:14:11.015 { 00:14:11.015 "dma_device_id": "system", 00:14:11.015 "dma_device_type": 1 00:14:11.015 }, 00:14:11.015 { 00:14:11.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.015 "dma_device_type": 2 00:14:11.015 } 00:14:11.015 ], 00:14:11.015 "driver_specific": {} 00:14:11.015 }' 00:14:11.015 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.015 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.015 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.015 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.015 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.015 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.015 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.272 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.272 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.272 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.272 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.272 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.272 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:11.272 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:11.272 10:21:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:11.553 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:11.553 "name": "BaseBdev3", 00:14:11.553 "aliases": [ 00:14:11.553 "483d4799-56ef-4499-9c72-f96a4360c533" 00:14:11.553 ], 00:14:11.553 "product_name": "Malloc disk", 00:14:11.553 "block_size": 512, 00:14:11.553 "num_blocks": 65536, 00:14:11.553 "uuid": "483d4799-56ef-4499-9c72-f96a4360c533", 00:14:11.553 "assigned_rate_limits": { 00:14:11.553 "rw_ios_per_sec": 0, 00:14:11.553 "rw_mbytes_per_sec": 0, 00:14:11.553 "r_mbytes_per_sec": 0, 00:14:11.553 "w_mbytes_per_sec": 0 00:14:11.553 }, 00:14:11.553 "claimed": true, 00:14:11.553 "claim_type": "exclusive_write", 00:14:11.553 "zoned": false, 00:14:11.553 
"supported_io_types": { 00:14:11.553 "read": true, 00:14:11.553 "write": true, 00:14:11.553 "unmap": true, 00:14:11.553 "flush": true, 00:14:11.553 "reset": true, 00:14:11.553 "nvme_admin": false, 00:14:11.553 "nvme_io": false, 00:14:11.553 "nvme_io_md": false, 00:14:11.553 "write_zeroes": true, 00:14:11.553 "zcopy": true, 00:14:11.553 "get_zone_info": false, 00:14:11.553 "zone_management": false, 00:14:11.553 "zone_append": false, 00:14:11.553 "compare": false, 00:14:11.553 "compare_and_write": false, 00:14:11.554 "abort": true, 00:14:11.554 "seek_hole": false, 00:14:11.554 "seek_data": false, 00:14:11.554 "copy": true, 00:14:11.554 "nvme_iov_md": false 00:14:11.554 }, 00:14:11.554 "memory_domains": [ 00:14:11.554 { 00:14:11.554 "dma_device_id": "system", 00:14:11.554 "dma_device_type": 1 00:14:11.554 }, 00:14:11.554 { 00:14:11.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.554 "dma_device_type": 2 00:14:11.554 } 00:14:11.554 ], 00:14:11.554 "driver_specific": {} 00:14:11.554 }' 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.554 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.812 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.812 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.812 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:11.812 [2024-07-15 10:21:36.566407] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.812 [2024-07-15 10:21:36.566426] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.812 [2024-07-15 10:21:36.566464] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.812 [2024-07-15 10:21:36.566644] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.812 [2024-07-15 10:21:36.566652] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26ad080 name Existed_Raid, state offline 00:14:11.812 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1794267 00:14:11.812 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1794267 ']' 00:14:11.812 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1794267 00:14:11.812 10:21:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:11.812 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.812 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1794267 00:14:12.071 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.071 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.071 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1794267' 00:14:12.071 killing process with pid 1794267 00:14:12.071 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1794267 00:14:12.071 [2024-07-15 10:21:36.644375] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.071 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1794267 00:14:12.071 [2024-07-15 10:21:36.667256] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.071 10:21:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:12.071 00:14:12.071 real 0m21.261s 00:14:12.071 user 0m38.759s 00:14:12.071 sys 0m4.140s 00:14:12.071 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.071 10:21:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.071 ************************************ 00:14:12.071 END TEST raid_state_function_test_sb 00:14:12.071 ************************************ 00:14:12.328 10:21:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:12.328 10:21:36 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:12.328 10:21:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:12.328 10:21:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.328 10:21:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.328 ************************************ 00:14:12.328 START TEST raid_superblock_test 00:14:12.328 ************************************ 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:12.328 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:12.329 10:21:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1798577 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1798577 /var/tmp/spdk-raid.sock 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1798577 ']' 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:12.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.329 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.329 [2024-07-15 10:21:36.983455] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
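The trace above is the standard SPDK autotest harness pattern: start bdev_svc on a dedicated RPC socket, wait for it to listen, then drive it with rpc.py. A minimal bash sketch of that setup, using only the paths, PID handling, and base-bdev parameters visible in this run (waitforlisten and killprocess are assumed to come from SPDK's test/common/autotest_common.sh, which the traced script sources):

spdk_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk
sock=/var/tmp/spdk-raid.sock

# start the bare bdev service with bdev_raid debug logging and remember its pid
"$spdk_dir/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" "$sock"   # poll until the UNIX-domain RPC socket answers

# one malloc bdev wrapped in a passthru bdev per base device (three in this test)
"$spdk_dir/scripts/rpc.py" -s "$sock" bdev_malloc_create 32 512 -b malloc1
"$spdk_dir/scripts/rpc.py" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 \
    -u 00000000-0000-0000-0000-000000000001
# ... repeated for malloc2/pt2 and malloc3/pt3; the service is torn down later with:
# killprocess "$raid_pid"

This is a condensed sketch of the harness flow, not an exact re-run of the script.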
00:14:12.329 [2024-07-15 10:21:36.983498] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798577 ] 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:12.329 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:12.329 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:12.329 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:12.329 [2024-07-15 10:21:37.073252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.587 [2024-07-15 10:21:37.143552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.587 [2024-07-15 10:21:37.194333] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.587 [2024-07-15 10:21:37.194362] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.152 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:13.152 malloc1 00:14:13.409 10:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:13.409 [2024-07-15 10:21:38.103154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:13.409 [2024-07-15 10:21:38.103191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.409 [2024-07-15 10:21:38.103205] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26e42f0 00:14:13.409 [2024-07-15 10:21:38.103214] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.409 [2024-07-15 10:21:38.104349] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.409 [2024-07-15 10:21:38.104372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:13.409 pt1 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.409 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:13.666 malloc2 00:14:13.666 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:13.924 [2024-07-15 10:21:38.463827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:13.924 [2024-07-15 10:21:38.463865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.924 [2024-07-15 10:21:38.463876] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26e56d0 00:14:13.924 [2024-07-15 10:21:38.463884] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.924 [2024-07-15 10:21:38.464911] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.924 [2024-07-15 10:21:38.464934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:13.924 pt2 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:13.924 malloc3 00:14:13.924 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:14.180 [2024-07-15 10:21:38.808265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:14.180 [2024-07-15 10:21:38.808298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.180 [2024-07-15 10:21:38.808308] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x287e6b0 00:14:14.180 [2024-07-15 10:21:38.808317] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.180 [2024-07-15 10:21:38.809318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.180 [2024-07-15 10:21:38.809341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:14.180 pt3 00:14:14.180 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:14.180 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:14.180 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:14.438 [2024-07-15 10:21:38.980720] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.438 [2024-07-15 10:21:38.981515] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.439 [2024-07-15 10:21:38.981552] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:14.439 [2024-07-15 10:21:38.981650] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x287ecb0 00:14:14.439 [2024-07-15 10:21:38.981658] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:14.439 [2024-07-15 10:21:38.981780] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x287e5a0 00:14:14.439 [2024-07-15 10:21:38.981877] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x287ecb0 00:14:14.439 [2024-07-15 10:21:38.981884] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x287ecb0 00:14:14.439 [2024-07-15 10:21:38.981950] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.439 10:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.439 "name": "raid_bdev1", 00:14:14.439 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:14.439 "strip_size_kb": 0, 00:14:14.439 "state": "online", 00:14:14.439 "raid_level": "raid1", 00:14:14.439 "superblock": true, 00:14:14.439 "num_base_bdevs": 3, 00:14:14.439 "num_base_bdevs_discovered": 3, 00:14:14.439 "num_base_bdevs_operational": 3, 00:14:14.439 "base_bdevs_list": [ 00:14:14.439 { 00:14:14.439 "name": "pt1", 00:14:14.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.439 "is_configured": true, 00:14:14.439 "data_offset": 2048, 00:14:14.439 "data_size": 63488 00:14:14.439 }, 00:14:14.439 { 00:14:14.439 "name": "pt2", 00:14:14.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.439 "is_configured": true, 00:14:14.439 "data_offset": 2048, 00:14:14.439 "data_size": 63488 00:14:14.439 }, 00:14:14.439 { 00:14:14.439 "name": "pt3", 00:14:14.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.439 "is_configured": true, 00:14:14.439 "data_offset": 2048, 00:14:14.439 "data_size": 63488 00:14:14.439 } 00:14:14.439 ] 00:14:14.439 }' 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.439 10:21:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.005 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:15.005 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:15.005 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:15.005 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:15.005 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:15.005 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:15.005 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:15.005 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:15.264 [2024-07-15 10:21:39.807012] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.264 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:15.264 "name": "raid_bdev1", 00:14:15.264 "aliases": [ 00:14:15.264 "4df85607-55e6-42ba-b665-cefc9827e3e8" 00:14:15.264 ], 00:14:15.264 "product_name": "Raid Volume", 00:14:15.264 "block_size": 512, 00:14:15.264 "num_blocks": 63488, 00:14:15.264 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:15.264 "assigned_rate_limits": { 00:14:15.264 "rw_ios_per_sec": 0, 00:14:15.264 "rw_mbytes_per_sec": 0, 00:14:15.264 "r_mbytes_per_sec": 0, 
00:14:15.264 "w_mbytes_per_sec": 0 00:14:15.264 }, 00:14:15.264 "claimed": false, 00:14:15.264 "zoned": false, 00:14:15.264 "supported_io_types": { 00:14:15.264 "read": true, 00:14:15.264 "write": true, 00:14:15.264 "unmap": false, 00:14:15.264 "flush": false, 00:14:15.264 "reset": true, 00:14:15.264 "nvme_admin": false, 00:14:15.264 "nvme_io": false, 00:14:15.264 "nvme_io_md": false, 00:14:15.264 "write_zeroes": true, 00:14:15.264 "zcopy": false, 00:14:15.264 "get_zone_info": false, 00:14:15.264 "zone_management": false, 00:14:15.264 "zone_append": false, 00:14:15.264 "compare": false, 00:14:15.264 "compare_and_write": false, 00:14:15.264 "abort": false, 00:14:15.264 "seek_hole": false, 00:14:15.264 "seek_data": false, 00:14:15.264 "copy": false, 00:14:15.264 "nvme_iov_md": false 00:14:15.264 }, 00:14:15.264 "memory_domains": [ 00:14:15.264 { 00:14:15.264 "dma_device_id": "system", 00:14:15.264 "dma_device_type": 1 00:14:15.264 }, 00:14:15.264 { 00:14:15.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.264 "dma_device_type": 2 00:14:15.264 }, 00:14:15.264 { 00:14:15.264 "dma_device_id": "system", 00:14:15.264 "dma_device_type": 1 00:14:15.264 }, 00:14:15.264 { 00:14:15.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.264 "dma_device_type": 2 00:14:15.264 }, 00:14:15.264 { 00:14:15.264 "dma_device_id": "system", 00:14:15.264 "dma_device_type": 1 00:14:15.264 }, 00:14:15.264 { 00:14:15.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.264 "dma_device_type": 2 00:14:15.264 } 00:14:15.264 ], 00:14:15.264 "driver_specific": { 00:14:15.264 "raid": { 00:14:15.264 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:15.264 "strip_size_kb": 0, 00:14:15.264 "state": "online", 00:14:15.264 "raid_level": "raid1", 00:14:15.264 "superblock": true, 00:14:15.264 "num_base_bdevs": 3, 00:14:15.264 "num_base_bdevs_discovered": 3, 00:14:15.264 "num_base_bdevs_operational": 3, 00:14:15.264 "base_bdevs_list": [ 00:14:15.264 { 00:14:15.264 "name": "pt1", 00:14:15.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.264 "is_configured": true, 00:14:15.264 "data_offset": 2048, 00:14:15.264 "data_size": 63488 00:14:15.264 }, 00:14:15.264 { 00:14:15.264 "name": "pt2", 00:14:15.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.264 "is_configured": true, 00:14:15.264 "data_offset": 2048, 00:14:15.264 "data_size": 63488 00:14:15.264 }, 00:14:15.264 { 00:14:15.264 "name": "pt3", 00:14:15.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.264 "is_configured": true, 00:14:15.264 "data_offset": 2048, 00:14:15.264 "data_size": 63488 00:14:15.264 } 00:14:15.264 ] 00:14:15.264 } 00:14:15.264 } 00:14:15.264 }' 00:14:15.264 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.264 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:15.264 pt2 00:14:15.264 pt3' 00:14:15.264 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.264 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:15.265 10:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.538 "name": "pt1", 00:14:15.538 "aliases": [ 00:14:15.538 
"00000000-0000-0000-0000-000000000001" 00:14:15.538 ], 00:14:15.538 "product_name": "passthru", 00:14:15.538 "block_size": 512, 00:14:15.538 "num_blocks": 65536, 00:14:15.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.538 "assigned_rate_limits": { 00:14:15.538 "rw_ios_per_sec": 0, 00:14:15.538 "rw_mbytes_per_sec": 0, 00:14:15.538 "r_mbytes_per_sec": 0, 00:14:15.538 "w_mbytes_per_sec": 0 00:14:15.538 }, 00:14:15.538 "claimed": true, 00:14:15.538 "claim_type": "exclusive_write", 00:14:15.538 "zoned": false, 00:14:15.538 "supported_io_types": { 00:14:15.538 "read": true, 00:14:15.538 "write": true, 00:14:15.538 "unmap": true, 00:14:15.538 "flush": true, 00:14:15.538 "reset": true, 00:14:15.538 "nvme_admin": false, 00:14:15.538 "nvme_io": false, 00:14:15.538 "nvme_io_md": false, 00:14:15.538 "write_zeroes": true, 00:14:15.538 "zcopy": true, 00:14:15.538 "get_zone_info": false, 00:14:15.538 "zone_management": false, 00:14:15.538 "zone_append": false, 00:14:15.538 "compare": false, 00:14:15.538 "compare_and_write": false, 00:14:15.538 "abort": true, 00:14:15.538 "seek_hole": false, 00:14:15.538 "seek_data": false, 00:14:15.538 "copy": true, 00:14:15.538 "nvme_iov_md": false 00:14:15.538 }, 00:14:15.538 "memory_domains": [ 00:14:15.538 { 00:14:15.538 "dma_device_id": "system", 00:14:15.538 "dma_device_type": 1 00:14:15.538 }, 00:14:15.538 { 00:14:15.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.538 "dma_device_type": 2 00:14:15.538 } 00:14:15.538 ], 00:14:15.538 "driver_specific": { 00:14:15.538 "passthru": { 00:14:15.538 "name": "pt1", 00:14:15.538 "base_bdev_name": "malloc1" 00:14:15.538 } 00:14:15.538 } 00:14:15.538 }' 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:15.538 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.797 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.797 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:15.797 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.797 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:15.797 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.797 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.797 "name": "pt2", 00:14:15.797 "aliases": [ 00:14:15.797 "00000000-0000-0000-0000-000000000002" 00:14:15.797 ], 00:14:15.797 "product_name": "passthru", 00:14:15.797 "block_size": 512, 
00:14:15.797 "num_blocks": 65536, 00:14:15.797 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.797 "assigned_rate_limits": { 00:14:15.797 "rw_ios_per_sec": 0, 00:14:15.797 "rw_mbytes_per_sec": 0, 00:14:15.797 "r_mbytes_per_sec": 0, 00:14:15.797 "w_mbytes_per_sec": 0 00:14:15.797 }, 00:14:15.797 "claimed": true, 00:14:15.797 "claim_type": "exclusive_write", 00:14:15.797 "zoned": false, 00:14:15.797 "supported_io_types": { 00:14:15.797 "read": true, 00:14:15.797 "write": true, 00:14:15.797 "unmap": true, 00:14:15.797 "flush": true, 00:14:15.797 "reset": true, 00:14:15.797 "nvme_admin": false, 00:14:15.797 "nvme_io": false, 00:14:15.797 "nvme_io_md": false, 00:14:15.797 "write_zeroes": true, 00:14:15.797 "zcopy": true, 00:14:15.797 "get_zone_info": false, 00:14:15.797 "zone_management": false, 00:14:15.797 "zone_append": false, 00:14:15.797 "compare": false, 00:14:15.797 "compare_and_write": false, 00:14:15.797 "abort": true, 00:14:15.797 "seek_hole": false, 00:14:15.797 "seek_data": false, 00:14:15.797 "copy": true, 00:14:15.797 "nvme_iov_md": false 00:14:15.797 }, 00:14:15.797 "memory_domains": [ 00:14:15.797 { 00:14:15.797 "dma_device_id": "system", 00:14:15.797 "dma_device_type": 1 00:14:15.797 }, 00:14:15.797 { 00:14:15.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.797 "dma_device_type": 2 00:14:15.797 } 00:14:15.797 ], 00:14:15.797 "driver_specific": { 00:14:15.797 "passthru": { 00:14:15.797 "name": "pt2", 00:14:15.797 "base_bdev_name": "malloc2" 00:14:15.797 } 00:14:15.797 } 00:14:15.797 }' 00:14:15.797 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.055 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.055 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.056 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.056 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.056 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.056 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.056 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.056 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.056 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.056 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.314 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.314 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.314 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:16.314 10:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.314 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.314 "name": "pt3", 00:14:16.314 "aliases": [ 00:14:16.314 "00000000-0000-0000-0000-000000000003" 00:14:16.314 ], 00:14:16.314 "product_name": "passthru", 00:14:16.314 "block_size": 512, 00:14:16.314 "num_blocks": 65536, 00:14:16.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.314 "assigned_rate_limits": { 
00:14:16.314 "rw_ios_per_sec": 0, 00:14:16.314 "rw_mbytes_per_sec": 0, 00:14:16.314 "r_mbytes_per_sec": 0, 00:14:16.314 "w_mbytes_per_sec": 0 00:14:16.314 }, 00:14:16.314 "claimed": true, 00:14:16.314 "claim_type": "exclusive_write", 00:14:16.314 "zoned": false, 00:14:16.314 "supported_io_types": { 00:14:16.314 "read": true, 00:14:16.314 "write": true, 00:14:16.314 "unmap": true, 00:14:16.314 "flush": true, 00:14:16.314 "reset": true, 00:14:16.314 "nvme_admin": false, 00:14:16.314 "nvme_io": false, 00:14:16.314 "nvme_io_md": false, 00:14:16.314 "write_zeroes": true, 00:14:16.314 "zcopy": true, 00:14:16.314 "get_zone_info": false, 00:14:16.314 "zone_management": false, 00:14:16.314 "zone_append": false, 00:14:16.314 "compare": false, 00:14:16.314 "compare_and_write": false, 00:14:16.314 "abort": true, 00:14:16.314 "seek_hole": false, 00:14:16.314 "seek_data": false, 00:14:16.314 "copy": true, 00:14:16.314 "nvme_iov_md": false 00:14:16.314 }, 00:14:16.314 "memory_domains": [ 00:14:16.314 { 00:14:16.314 "dma_device_id": "system", 00:14:16.314 "dma_device_type": 1 00:14:16.314 }, 00:14:16.314 { 00:14:16.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.314 "dma_device_type": 2 00:14:16.314 } 00:14:16.314 ], 00:14:16.314 "driver_specific": { 00:14:16.314 "passthru": { 00:14:16.314 "name": "pt3", 00:14:16.314 "base_bdev_name": "malloc3" 00:14:16.314 } 00:14:16.314 } 00:14:16.314 }' 00:14:16.314 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.314 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.314 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.314 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:16.573 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:16.832 [2024-07-15 10:21:41.471288] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.832 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4df85607-55e6-42ba-b665-cefc9827e3e8 00:14:16.832 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 4df85607-55e6-42ba-b665-cefc9827e3e8 ']' 00:14:16.832 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:17.092 [2024-07-15 10:21:41.631511] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.092 [2024-07-15 10:21:41.631528] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.092 [2024-07-15 10:21:41.631568] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.092 [2024-07-15 10:21:41.631617] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.092 [2024-07-15 10:21:41.631625] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x287ecb0 name raid_bdev1, state offline 00:14:17.093 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.093 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:17.093 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:17.093 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:17.093 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:17.093 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:17.352 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:17.352 10:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:17.611 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:17.611 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:17.611 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:17.611 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:17.871 
10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:17.871 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:17.871 [2024-07-15 10:21:42.650104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:17.871 [2024-07-15 10:21:42.651095] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:17.871 [2024-07-15 10:21:42.651126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:17.871 [2024-07-15 10:21:42.651158] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:17.871 [2024-07-15 10:21:42.651186] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:17.871 [2024-07-15 10:21:42.651200] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:17.871 [2024-07-15 10:21:42.651211] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.871 [2024-07-15 10:21:42.651218] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2887d50 name raid_bdev1, state configuring 00:14:17.871 request: 00:14:17.871 { 00:14:17.871 "name": "raid_bdev1", 00:14:17.871 "raid_level": "raid1", 00:14:17.871 "base_bdevs": [ 00:14:17.871 "malloc1", 00:14:17.871 "malloc2", 00:14:17.871 "malloc3" 00:14:17.871 ], 00:14:17.871 "superblock": false, 00:14:17.871 "method": "bdev_raid_create", 00:14:17.871 "req_id": 1 00:14:17.871 } 00:14:17.871 Got JSON-RPC error response 00:14:17.871 response: 00:14:17.871 { 00:14:17.871 "code": -17, 00:14:17.871 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:17.871 } 00:14:18.129 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:18.129 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:18.129 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:18.129 10:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:18.129 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.129 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:18.129 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:18.129 10:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:18.129 10:21:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:18.387 [2024-07-15 10:21:42.994961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:18.387 [2024-07-15 10:21:42.994996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.387 [2024-07-15 10:21:42.995007] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x287bd00 00:14:18.387 [2024-07-15 10:21:42.995015] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.387 [2024-07-15 10:21:42.996136] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.387 [2024-07-15 10:21:42.996159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:18.387 [2024-07-15 10:21:42.996204] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:18.387 [2024-07-15 10:21:42.996222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:18.387 pt1 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.387 "name": "raid_bdev1", 00:14:18.387 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:18.387 "strip_size_kb": 0, 00:14:18.387 "state": "configuring", 00:14:18.387 "raid_level": "raid1", 00:14:18.387 "superblock": true, 00:14:18.387 "num_base_bdevs": 3, 00:14:18.387 "num_base_bdevs_discovered": 1, 00:14:18.387 "num_base_bdevs_operational": 3, 00:14:18.387 "base_bdevs_list": [ 00:14:18.387 { 00:14:18.387 "name": "pt1", 00:14:18.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:18.387 "is_configured": true, 00:14:18.387 "data_offset": 2048, 00:14:18.387 "data_size": 63488 00:14:18.387 }, 00:14:18.387 { 00:14:18.387 "name": null, 00:14:18.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.387 "is_configured": false, 00:14:18.387 "data_offset": 2048, 00:14:18.387 
"data_size": 63488 00:14:18.387 }, 00:14:18.387 { 00:14:18.387 "name": null, 00:14:18.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.387 "is_configured": false, 00:14:18.387 "data_offset": 2048, 00:14:18.387 "data_size": 63488 00:14:18.387 } 00:14:18.387 ] 00:14:18.387 }' 00:14:18.387 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.388 10:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.953 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:14:18.953 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:18.953 [2024-07-15 10:21:43.741069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:18.953 [2024-07-15 10:21:43.741108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.953 [2024-07-15 10:21:43.741122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26dac20 00:14:18.953 [2024-07-15 10:21:43.741131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.953 [2024-07-15 10:21:43.741386] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.953 [2024-07-15 10:21:43.741400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:18.953 [2024-07-15 10:21:43.741448] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:18.953 [2024-07-15 10:21:43.741462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:19.212 pt2 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:19.212 [2024-07-15 10:21:43.909510] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.212 10:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:19.483 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.483 "name": "raid_bdev1", 00:14:19.483 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:19.483 "strip_size_kb": 0, 00:14:19.483 "state": "configuring", 00:14:19.483 "raid_level": "raid1", 00:14:19.483 "superblock": true, 00:14:19.483 "num_base_bdevs": 3, 00:14:19.483 "num_base_bdevs_discovered": 1, 00:14:19.483 "num_base_bdevs_operational": 3, 00:14:19.483 "base_bdevs_list": [ 00:14:19.483 { 00:14:19.483 "name": "pt1", 00:14:19.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:19.483 "is_configured": true, 00:14:19.483 "data_offset": 2048, 00:14:19.483 "data_size": 63488 00:14:19.483 }, 00:14:19.483 { 00:14:19.483 "name": null, 00:14:19.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.483 "is_configured": false, 00:14:19.483 "data_offset": 2048, 00:14:19.483 "data_size": 63488 00:14:19.483 }, 00:14:19.483 { 00:14:19.483 "name": null, 00:14:19.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.483 "is_configured": false, 00:14:19.483 "data_offset": 2048, 00:14:19.483 "data_size": 63488 00:14:19.483 } 00:14:19.483 ] 00:14:19.483 }' 00:14:19.483 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.483 10:21:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.051 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:20.051 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:20.051 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:20.051 [2024-07-15 10:21:44.727618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:20.051 [2024-07-15 10:21:44.727661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.051 [2024-07-15 10:21:44.727674] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26db4d0 00:14:20.051 [2024-07-15 10:21:44.727682] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.051 [2024-07-15 10:21:44.727960] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.051 [2024-07-15 10:21:44.727974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:20.051 [2024-07-15 10:21:44.728021] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:20.051 [2024-07-15 10:21:44.728034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:20.051 pt2 00:14:20.051 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:20.051 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:20.051 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:20.310 [2024-07-15 10:21:44.896050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:20.310 [2024-07-15 10:21:44.896071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.310 [2024-07-15 10:21:44.896082] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26db9e0 00:14:20.310 [2024-07-15 10:21:44.896089] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.310 [2024-07-15 10:21:44.896284] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.310 [2024-07-15 10:21:44.896297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:20.310 [2024-07-15 10:21:44.896328] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:20.311 [2024-07-15 10:21:44.896340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:20.311 [2024-07-15 10:21:44.896409] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26daf40 00:14:20.311 [2024-07-15 10:21:44.896418] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.311 [2024-07-15 10:21:44.896525] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x287dc60 00:14:20.311 [2024-07-15 10:21:44.896611] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26daf40 00:14:20.311 [2024-07-15 10:21:44.896618] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26daf40 00:14:20.311 [2024-07-15 10:21:44.896678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.311 pt3 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.311 10:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.311 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.311 "name": "raid_bdev1", 00:14:20.311 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:20.311 "strip_size_kb": 0, 00:14:20.311 "state": "online", 00:14:20.311 "raid_level": "raid1", 00:14:20.311 "superblock": true, 00:14:20.311 "num_base_bdevs": 3, 00:14:20.311 "num_base_bdevs_discovered": 3, 00:14:20.311 "num_base_bdevs_operational": 3, 
00:14:20.311 "base_bdevs_list": [ 00:14:20.311 { 00:14:20.311 "name": "pt1", 00:14:20.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:20.311 "is_configured": true, 00:14:20.311 "data_offset": 2048, 00:14:20.311 "data_size": 63488 00:14:20.311 }, 00:14:20.311 { 00:14:20.311 "name": "pt2", 00:14:20.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:20.311 "is_configured": true, 00:14:20.311 "data_offset": 2048, 00:14:20.311 "data_size": 63488 00:14:20.311 }, 00:14:20.311 { 00:14:20.311 "name": "pt3", 00:14:20.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:20.311 "is_configured": true, 00:14:20.311 "data_offset": 2048, 00:14:20.311 "data_size": 63488 00:14:20.311 } 00:14:20.311 ] 00:14:20.311 }' 00:14:20.311 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.311 10:21:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.879 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:20.879 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:20.879 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:20.879 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:20.879 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:20.879 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:20.879 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:20.879 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:21.178 [2024-07-15 10:21:45.702314] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.178 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:21.178 "name": "raid_bdev1", 00:14:21.178 "aliases": [ 00:14:21.178 "4df85607-55e6-42ba-b665-cefc9827e3e8" 00:14:21.178 ], 00:14:21.178 "product_name": "Raid Volume", 00:14:21.178 "block_size": 512, 00:14:21.178 "num_blocks": 63488, 00:14:21.178 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:21.178 "assigned_rate_limits": { 00:14:21.178 "rw_ios_per_sec": 0, 00:14:21.178 "rw_mbytes_per_sec": 0, 00:14:21.178 "r_mbytes_per_sec": 0, 00:14:21.178 "w_mbytes_per_sec": 0 00:14:21.178 }, 00:14:21.178 "claimed": false, 00:14:21.178 "zoned": false, 00:14:21.178 "supported_io_types": { 00:14:21.178 "read": true, 00:14:21.178 "write": true, 00:14:21.178 "unmap": false, 00:14:21.178 "flush": false, 00:14:21.178 "reset": true, 00:14:21.178 "nvme_admin": false, 00:14:21.178 "nvme_io": false, 00:14:21.178 "nvme_io_md": false, 00:14:21.178 "write_zeroes": true, 00:14:21.178 "zcopy": false, 00:14:21.178 "get_zone_info": false, 00:14:21.178 "zone_management": false, 00:14:21.178 "zone_append": false, 00:14:21.178 "compare": false, 00:14:21.178 "compare_and_write": false, 00:14:21.178 "abort": false, 00:14:21.178 "seek_hole": false, 00:14:21.178 "seek_data": false, 00:14:21.178 "copy": false, 00:14:21.178 "nvme_iov_md": false 00:14:21.178 }, 00:14:21.178 "memory_domains": [ 00:14:21.178 { 00:14:21.178 "dma_device_id": "system", 00:14:21.178 "dma_device_type": 1 00:14:21.179 }, 00:14:21.179 { 00:14:21.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:21.179 "dma_device_type": 2 00:14:21.179 }, 00:14:21.179 { 00:14:21.179 "dma_device_id": "system", 00:14:21.179 "dma_device_type": 1 00:14:21.179 }, 00:14:21.179 { 00:14:21.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.179 "dma_device_type": 2 00:14:21.179 }, 00:14:21.179 { 00:14:21.179 "dma_device_id": "system", 00:14:21.179 "dma_device_type": 1 00:14:21.179 }, 00:14:21.179 { 00:14:21.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.179 "dma_device_type": 2 00:14:21.179 } 00:14:21.179 ], 00:14:21.179 "driver_specific": { 00:14:21.179 "raid": { 00:14:21.179 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:21.179 "strip_size_kb": 0, 00:14:21.179 "state": "online", 00:14:21.179 "raid_level": "raid1", 00:14:21.179 "superblock": true, 00:14:21.179 "num_base_bdevs": 3, 00:14:21.179 "num_base_bdevs_discovered": 3, 00:14:21.179 "num_base_bdevs_operational": 3, 00:14:21.179 "base_bdevs_list": [ 00:14:21.179 { 00:14:21.179 "name": "pt1", 00:14:21.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.179 "is_configured": true, 00:14:21.179 "data_offset": 2048, 00:14:21.179 "data_size": 63488 00:14:21.179 }, 00:14:21.179 { 00:14:21.179 "name": "pt2", 00:14:21.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.179 "is_configured": true, 00:14:21.179 "data_offset": 2048, 00:14:21.179 "data_size": 63488 00:14:21.179 }, 00:14:21.179 { 00:14:21.179 "name": "pt3", 00:14:21.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:21.179 "is_configured": true, 00:14:21.179 "data_offset": 2048, 00:14:21.179 "data_size": 63488 00:14:21.179 } 00:14:21.179 ] 00:14:21.179 } 00:14:21.179 } 00:14:21.179 }' 00:14:21.179 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.179 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:21.179 pt2 00:14:21.179 pt3' 00:14:21.179 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:21.179 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:21.179 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.179 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:21.179 "name": "pt1", 00:14:21.179 "aliases": [ 00:14:21.179 "00000000-0000-0000-0000-000000000001" 00:14:21.179 ], 00:14:21.179 "product_name": "passthru", 00:14:21.179 "block_size": 512, 00:14:21.179 "num_blocks": 65536, 00:14:21.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:21.179 "assigned_rate_limits": { 00:14:21.179 "rw_ios_per_sec": 0, 00:14:21.179 "rw_mbytes_per_sec": 0, 00:14:21.179 "r_mbytes_per_sec": 0, 00:14:21.179 "w_mbytes_per_sec": 0 00:14:21.179 }, 00:14:21.179 "claimed": true, 00:14:21.179 "claim_type": "exclusive_write", 00:14:21.179 "zoned": false, 00:14:21.179 "supported_io_types": { 00:14:21.179 "read": true, 00:14:21.179 "write": true, 00:14:21.179 "unmap": true, 00:14:21.179 "flush": true, 00:14:21.179 "reset": true, 00:14:21.179 "nvme_admin": false, 00:14:21.179 "nvme_io": false, 00:14:21.179 "nvme_io_md": false, 00:14:21.179 "write_zeroes": true, 00:14:21.179 "zcopy": true, 00:14:21.179 "get_zone_info": false, 00:14:21.179 "zone_management": false, 00:14:21.179 "zone_append": false, 00:14:21.179 "compare": false, 00:14:21.179 
"compare_and_write": false, 00:14:21.179 "abort": true, 00:14:21.179 "seek_hole": false, 00:14:21.179 "seek_data": false, 00:14:21.179 "copy": true, 00:14:21.179 "nvme_iov_md": false 00:14:21.179 }, 00:14:21.179 "memory_domains": [ 00:14:21.179 { 00:14:21.179 "dma_device_id": "system", 00:14:21.179 "dma_device_type": 1 00:14:21.179 }, 00:14:21.179 { 00:14:21.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.179 "dma_device_type": 2 00:14:21.179 } 00:14:21.179 ], 00:14:21.179 "driver_specific": { 00:14:21.179 "passthru": { 00:14:21.179 "name": "pt1", 00:14:21.179 "base_bdev_name": "malloc1" 00:14:21.179 } 00:14:21.179 } 00:14:21.179 }' 00:14:21.179 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.438 10:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.438 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.697 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:21.697 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:21.697 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.697 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:21.697 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:21.697 "name": "pt2", 00:14:21.697 "aliases": [ 00:14:21.697 "00000000-0000-0000-0000-000000000002" 00:14:21.697 ], 00:14:21.697 "product_name": "passthru", 00:14:21.697 "block_size": 512, 00:14:21.697 "num_blocks": 65536, 00:14:21.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:21.697 "assigned_rate_limits": { 00:14:21.697 "rw_ios_per_sec": 0, 00:14:21.697 "rw_mbytes_per_sec": 0, 00:14:21.697 "r_mbytes_per_sec": 0, 00:14:21.697 "w_mbytes_per_sec": 0 00:14:21.697 }, 00:14:21.697 "claimed": true, 00:14:21.697 "claim_type": "exclusive_write", 00:14:21.697 "zoned": false, 00:14:21.697 "supported_io_types": { 00:14:21.697 "read": true, 00:14:21.697 "write": true, 00:14:21.697 "unmap": true, 00:14:21.697 "flush": true, 00:14:21.697 "reset": true, 00:14:21.697 "nvme_admin": false, 00:14:21.697 "nvme_io": false, 00:14:21.697 "nvme_io_md": false, 00:14:21.697 "write_zeroes": true, 00:14:21.697 "zcopy": true, 00:14:21.697 "get_zone_info": false, 00:14:21.697 "zone_management": false, 00:14:21.697 "zone_append": false, 00:14:21.697 "compare": false, 00:14:21.697 "compare_and_write": false, 00:14:21.697 "abort": true, 00:14:21.697 "seek_hole": false, 00:14:21.697 "seek_data": false, 00:14:21.697 
"copy": true, 00:14:21.697 "nvme_iov_md": false 00:14:21.697 }, 00:14:21.697 "memory_domains": [ 00:14:21.697 { 00:14:21.697 "dma_device_id": "system", 00:14:21.697 "dma_device_type": 1 00:14:21.697 }, 00:14:21.697 { 00:14:21.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.697 "dma_device_type": 2 00:14:21.697 } 00:14:21.697 ], 00:14:21.697 "driver_specific": { 00:14:21.697 "passthru": { 00:14:21.697 "name": "pt2", 00:14:21.697 "base_bdev_name": "malloc2" 00:14:21.697 } 00:14:21.697 } 00:14:21.697 }' 00:14:21.697 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.697 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:21.955 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:22.214 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:22.214 "name": "pt3", 00:14:22.214 "aliases": [ 00:14:22.214 "00000000-0000-0000-0000-000000000003" 00:14:22.214 ], 00:14:22.214 "product_name": "passthru", 00:14:22.214 "block_size": 512, 00:14:22.214 "num_blocks": 65536, 00:14:22.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.214 "assigned_rate_limits": { 00:14:22.214 "rw_ios_per_sec": 0, 00:14:22.214 "rw_mbytes_per_sec": 0, 00:14:22.214 "r_mbytes_per_sec": 0, 00:14:22.214 "w_mbytes_per_sec": 0 00:14:22.214 }, 00:14:22.214 "claimed": true, 00:14:22.214 "claim_type": "exclusive_write", 00:14:22.214 "zoned": false, 00:14:22.214 "supported_io_types": { 00:14:22.214 "read": true, 00:14:22.214 "write": true, 00:14:22.214 "unmap": true, 00:14:22.214 "flush": true, 00:14:22.214 "reset": true, 00:14:22.214 "nvme_admin": false, 00:14:22.214 "nvme_io": false, 00:14:22.214 "nvme_io_md": false, 00:14:22.214 "write_zeroes": true, 00:14:22.214 "zcopy": true, 00:14:22.214 "get_zone_info": false, 00:14:22.214 "zone_management": false, 00:14:22.214 "zone_append": false, 00:14:22.214 "compare": false, 00:14:22.214 "compare_and_write": false, 00:14:22.214 "abort": true, 00:14:22.214 "seek_hole": false, 00:14:22.214 "seek_data": false, 00:14:22.214 "copy": true, 00:14:22.214 "nvme_iov_md": false 00:14:22.214 }, 00:14:22.214 "memory_domains": [ 00:14:22.214 { 00:14:22.214 
"dma_device_id": "system", 00:14:22.214 "dma_device_type": 1 00:14:22.214 }, 00:14:22.214 { 00:14:22.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.214 "dma_device_type": 2 00:14:22.214 } 00:14:22.214 ], 00:14:22.214 "driver_specific": { 00:14:22.214 "passthru": { 00:14:22.214 "name": "pt3", 00:14:22.214 "base_bdev_name": "malloc3" 00:14:22.214 } 00:14:22.214 } 00:14:22.214 }' 00:14:22.214 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.214 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.214 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:22.214 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.214 10:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:22.472 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:22.731 [2024-07-15 10:21:47.342543] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.731 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 4df85607-55e6-42ba-b665-cefc9827e3e8 '!=' 4df85607-55e6-42ba-b665-cefc9827e3e8 ']' 00:14:22.731 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:14:22.731 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:22.731 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:22.731 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:22.731 [2024-07-15 10:21:47.518832] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.990 "name": "raid_bdev1", 00:14:22.990 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:22.990 "strip_size_kb": 0, 00:14:22.990 "state": "online", 00:14:22.990 "raid_level": "raid1", 00:14:22.990 "superblock": true, 00:14:22.990 "num_base_bdevs": 3, 00:14:22.990 "num_base_bdevs_discovered": 2, 00:14:22.990 "num_base_bdevs_operational": 2, 00:14:22.990 "base_bdevs_list": [ 00:14:22.990 { 00:14:22.990 "name": null, 00:14:22.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.990 "is_configured": false, 00:14:22.990 "data_offset": 2048, 00:14:22.990 "data_size": 63488 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "name": "pt2", 00:14:22.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.990 "is_configured": true, 00:14:22.990 "data_offset": 2048, 00:14:22.990 "data_size": 63488 00:14:22.990 }, 00:14:22.990 { 00:14:22.990 "name": "pt3", 00:14:22.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.990 "is_configured": true, 00:14:22.990 "data_offset": 2048, 00:14:22.990 "data_size": 63488 00:14:22.990 } 00:14:22.990 ] 00:14:22.990 }' 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.990 10:21:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.558 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:23.558 [2024-07-15 10:21:48.340973] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.558 [2024-07-15 10:21:48.340994] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.558 [2024-07-15 10:21:48.341030] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.558 [2024-07-15 10:21:48.341067] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.558 [2024-07-15 10:21:48.341075] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26daf40 name raid_bdev1, state offline 00:14:23.817 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.817 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:14:23.817 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:14:23.817 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:14:23.817 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:14:23.817 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < 
num_base_bdevs )) 00:14:23.817 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:24.075 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:24.075 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:24.075 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:24.335 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:14:24.335 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:14:24.335 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:14:24.335 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:24.335 10:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:24.335 [2024-07-15 10:21:49.022705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:24.335 [2024-07-15 10:21:49.022744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.335 [2024-07-15 10:21:49.022762] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28873f0 00:14:24.335 [2024-07-15 10:21:49.022773] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.335 [2024-07-15 10:21:49.024043] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.335 [2024-07-15 10:21:49.024069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:24.335 [2024-07-15 10:21:49.024132] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:24.335 [2024-07-15 10:21:49.024158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:24.335 pt2 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.335 10:21:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.594 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.594 "name": "raid_bdev1", 00:14:24.594 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:24.594 "strip_size_kb": 0, 00:14:24.594 "state": "configuring", 00:14:24.594 "raid_level": "raid1", 00:14:24.594 "superblock": true, 00:14:24.594 "num_base_bdevs": 3, 00:14:24.594 "num_base_bdevs_discovered": 1, 00:14:24.594 "num_base_bdevs_operational": 2, 00:14:24.594 "base_bdevs_list": [ 00:14:24.594 { 00:14:24.595 "name": null, 00:14:24.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.595 "is_configured": false, 00:14:24.595 "data_offset": 2048, 00:14:24.595 "data_size": 63488 00:14:24.595 }, 00:14:24.595 { 00:14:24.595 "name": "pt2", 00:14:24.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.595 "is_configured": true, 00:14:24.595 "data_offset": 2048, 00:14:24.595 "data_size": 63488 00:14:24.595 }, 00:14:24.595 { 00:14:24.595 "name": null, 00:14:24.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.595 "is_configured": false, 00:14:24.595 "data_offset": 2048, 00:14:24.595 "data_size": 63488 00:14:24.595 } 00:14:24.595 ] 00:14:24.595 }' 00:14:24.595 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.595 10:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.163 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:14:25.163 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:14:25.163 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:14:25.163 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:25.163 [2024-07-15 10:21:49.800700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:25.163 [2024-07-15 10:21:49.800741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.163 [2024-07-15 10:21:49.800761] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x287c370 00:14:25.163 [2024-07-15 10:21:49.800772] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.163 [2024-07-15 10:21:49.801056] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.163 [2024-07-15 10:21:49.801074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:25.163 [2024-07-15 10:21:49.801130] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:25.163 [2024-07-15 10:21:49.801148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:25.163 [2024-07-15 10:21:49.801219] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26dd430 00:14:25.163 [2024-07-15 10:21:49.801225] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.163 [2024-07-15 10:21:49.801338] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28969c0 00:14:25.163 [2024-07-15 10:21:49.801421] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26dd430 00:14:25.163 [2024-07-15 
10:21:49.801427] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26dd430 00:14:25.163 [2024-07-15 10:21:49.801486] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.163 pt3 00:14:25.163 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:25.163 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:25.163 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:25.163 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:25.164 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:25.164 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:25.164 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.164 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.164 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.164 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.164 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.164 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.423 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.423 "name": "raid_bdev1", 00:14:25.423 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:25.423 "strip_size_kb": 0, 00:14:25.423 "state": "online", 00:14:25.423 "raid_level": "raid1", 00:14:25.423 "superblock": true, 00:14:25.423 "num_base_bdevs": 3, 00:14:25.423 "num_base_bdevs_discovered": 2, 00:14:25.423 "num_base_bdevs_operational": 2, 00:14:25.423 "base_bdevs_list": [ 00:14:25.423 { 00:14:25.423 "name": null, 00:14:25.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.423 "is_configured": false, 00:14:25.423 "data_offset": 2048, 00:14:25.423 "data_size": 63488 00:14:25.423 }, 00:14:25.423 { 00:14:25.423 "name": "pt2", 00:14:25.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.423 "is_configured": true, 00:14:25.423 "data_offset": 2048, 00:14:25.423 "data_size": 63488 00:14:25.423 }, 00:14:25.423 { 00:14:25.423 "name": "pt3", 00:14:25.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.423 "is_configured": true, 00:14:25.423 "data_offset": 2048, 00:14:25.423 "data_size": 63488 00:14:25.423 } 00:14:25.423 ] 00:14:25.423 }' 00:14:25.423 10:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.423 10:21:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.991 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:25.991 [2024-07-15 10:21:50.638853] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.991 [2024-07-15 10:21:50.638876] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.991 [2024-07-15 
10:21:50.638925] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.991 [2024-07-15 10:21:50.638965] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.991 [2024-07-15 10:21:50.638973] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26dd430 name raid_bdev1, state offline 00:14:25.991 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.991 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:14:26.249 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:14:26.249 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:14:26.249 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:14:26.249 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:14:26.249 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:26.249 10:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:26.508 [2024-07-15 10:21:51.136133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:26.508 [2024-07-15 10:21:51.136174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.508 [2024-07-15 10:21:51.136195] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x287c370 00:14:26.508 [2024-07-15 10:21:51.136206] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.508 [2024-07-15 10:21:51.137435] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.508 [2024-07-15 10:21:51.137462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:26.508 [2024-07-15 10:21:51.137527] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:26.508 [2024-07-15 10:21:51.137553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:26.508 [2024-07-15 10:21:51.137636] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:26.508 [2024-07-15 10:21:51.137645] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:26.508 [2024-07-15 10:21:51.137654] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x287cde0 name raid_bdev1, state configuring 00:14:26.508 [2024-07-15 10:21:51.137672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.508 pt1 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:26.508 10:21:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.508 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.767 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.767 "name": "raid_bdev1", 00:14:26.767 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:26.767 "strip_size_kb": 0, 00:14:26.767 "state": "configuring", 00:14:26.767 "raid_level": "raid1", 00:14:26.767 "superblock": true, 00:14:26.767 "num_base_bdevs": 3, 00:14:26.767 "num_base_bdevs_discovered": 1, 00:14:26.767 "num_base_bdevs_operational": 2, 00:14:26.767 "base_bdevs_list": [ 00:14:26.767 { 00:14:26.767 "name": null, 00:14:26.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.767 "is_configured": false, 00:14:26.767 "data_offset": 2048, 00:14:26.767 "data_size": 63488 00:14:26.767 }, 00:14:26.767 { 00:14:26.767 "name": "pt2", 00:14:26.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.767 "is_configured": true, 00:14:26.767 "data_offset": 2048, 00:14:26.767 "data_size": 63488 00:14:26.767 }, 00:14:26.767 { 00:14:26.767 "name": null, 00:14:26.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.767 "is_configured": false, 00:14:26.767 "data_offset": 2048, 00:14:26.767 "data_size": 63488 00:14:26.767 } 00:14:26.767 ] 00:14:26.767 }' 00:14:26.767 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.767 10:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.026 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:14:27.026 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:27.285 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:14:27.285 10:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:27.545 [2024-07-15 10:21:52.134728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:27.545 [2024-07-15 10:21:52.134776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.545 [2024-07-15 10:21:52.134791] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26df290 00:14:27.545 [2024-07-15 10:21:52.134801] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.545 [2024-07-15 10:21:52.135075] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.545 [2024-07-15 10:21:52.135090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:27.545 [2024-07-15 10:21:52.135141] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:27.545 [2024-07-15 10:21:52.135155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:27.545 [2024-07-15 10:21:52.135228] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x287d6d0 00:14:27.545 [2024-07-15 10:21:52.135235] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:27.545 [2024-07-15 10:21:52.135350] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26e4f40 00:14:27.545 [2024-07-15 10:21:52.135435] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x287d6d0 00:14:27.545 [2024-07-15 10:21:52.135441] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x287d6d0 00:14:27.546 [2024-07-15 10:21:52.135507] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.546 pt3 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.546 "name": "raid_bdev1", 00:14:27.546 "uuid": "4df85607-55e6-42ba-b665-cefc9827e3e8", 00:14:27.546 "strip_size_kb": 0, 00:14:27.546 "state": "online", 00:14:27.546 "raid_level": "raid1", 00:14:27.546 "superblock": true, 00:14:27.546 "num_base_bdevs": 3, 00:14:27.546 "num_base_bdevs_discovered": 2, 00:14:27.546 "num_base_bdevs_operational": 2, 00:14:27.546 "base_bdevs_list": [ 00:14:27.546 { 00:14:27.546 "name": null, 00:14:27.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.546 "is_configured": false, 00:14:27.546 "data_offset": 2048, 00:14:27.546 "data_size": 63488 00:14:27.546 }, 00:14:27.546 { 00:14:27.546 "name": "pt2", 00:14:27.546 "uuid": "00000000-0000-0000-0000-000000000002", 
00:14:27.546 "is_configured": true, 00:14:27.546 "data_offset": 2048, 00:14:27.546 "data_size": 63488 00:14:27.546 }, 00:14:27.546 { 00:14:27.546 "name": "pt3", 00:14:27.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.546 "is_configured": true, 00:14:27.546 "data_offset": 2048, 00:14:27.546 "data_size": 63488 00:14:27.546 } 00:14:27.546 ] 00:14:27.546 }' 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.546 10:21:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.114 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:28.114 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:28.372 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:14:28.372 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:28.372 10:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:14:28.372 [2024-07-15 10:21:53.117520] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.372 10:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4df85607-55e6-42ba-b665-cefc9827e3e8 '!=' 4df85607-55e6-42ba-b665-cefc9827e3e8 ']' 00:14:28.372 10:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1798577 00:14:28.372 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1798577 ']' 00:14:28.372 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1798577 00:14:28.372 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:28.372 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.372 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1798577 00:14:28.641 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:28.641 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:28.641 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1798577' 00:14:28.641 killing process with pid 1798577 00:14:28.641 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1798577 00:14:28.641 [2024-07-15 10:21:53.202399] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.641 [2024-07-15 10:21:53.202448] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.641 [2024-07-15 10:21:53.202484] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.641 [2024-07-15 10:21:53.202492] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x287d6d0 name raid_bdev1, state offline 00:14:28.641 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1798577 00:14:28.641 [2024-07-15 10:21:53.225111] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.641 10:21:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@564 -- # return 0 00:14:28.641 00:14:28.641 real 0m16.473s 00:14:28.641 user 0m29.864s 00:14:28.641 sys 0m3.162s 00:14:28.641 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.642 10:21:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.642 ************************************ 00:14:28.642 END TEST raid_superblock_test 00:14:28.642 ************************************ 00:14:28.905 10:21:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:28.905 10:21:53 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:28.905 10:21:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:28.905 10:21:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.905 10:21:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.905 ************************************ 00:14:28.905 START TEST raid_read_error_test 00:14:28.905 ************************************ 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:28.905 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qbrGfcvSlc 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1801709 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1801709 /var/tmp/spdk-raid.sock 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1801709 ']' 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:28.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.906 10:21:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.906 [2024-07-15 10:21:53.548267] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:28.906 [2024-07-15 10:21:53.548311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801709 ] 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:02.2 cannot 
be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:28.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:28.906 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:28.906 [2024-07-15 10:21:53.639112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.165 [2024-07-15 10:21:53.714924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.165 [2024-07-15 10:21:53.776677] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.165 [2024-07-15 10:21:53.776698] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.733 10:21:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.733 10:21:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:29.733 10:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:29.733 10:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:29.733 BaseBdev1_malloc 00:14:29.992 10:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:29.992 true 00:14:29.992 10:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:30.252 [2024-07-15 10:21:54.853127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:30.252 [2024-07-15 10:21:54.853160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.252 [2024-07-15 10:21:54.853175] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2131190 00:14:30.252 [2024-07-15 10:21:54.853183] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.252 [2024-07-15 10:21:54.854384] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.252 [2024-07-15 10:21:54.854407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.252 BaseBdev1 00:14:30.252 10:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:30.252 10:21:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:30.252 BaseBdev2_malloc 00:14:30.252 10:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:30.511 true 00:14:30.511 10:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:30.770 [2024-07-15 10:21:55.362083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:30.770 [2024-07-15 10:21:55.362112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.770 [2024-07-15 10:21:55.362125] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2135e20 00:14:30.770 [2024-07-15 10:21:55.362133] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.770 [2024-07-15 10:21:55.363046] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.770 [2024-07-15 10:21:55.363068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.770 BaseBdev2 00:14:30.770 10:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:30.770 10:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev3_malloc 00:14:30.770 BaseBdev3_malloc 00:14:31.029 10:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:31.029 true 00:14:31.029 10:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:31.288 [2024-07-15 10:21:55.895193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:31.288 [2024-07-15 10:21:55.895225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.288 [2024-07-15 10:21:55.895240] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2136d90 00:14:31.288 [2024-07-15 10:21:55.895248] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.288 [2024-07-15 10:21:55.896219] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.288 [2024-07-15 10:21:55.896240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:31.288 BaseBdev3 00:14:31.288 10:21:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:31.547 [2024-07-15 10:21:56.083703] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.547 [2024-07-15 10:21:56.084651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:31.547 [2024-07-15 10:21:56.084698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.547 [2024-07-15 10:21:56.084838] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2138ba0 00:14:31.547 [2024-07-15 10:21:56.084846] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:31.547 [2024-07-15 10:21:56.085003] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2138820 00:14:31.547 [2024-07-15 10:21:56.085110] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2138ba0 00:14:31.547 [2024-07-15 10:21:56.085117] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2138ba0 00:14:31.547 [2024-07-15 10:21:56.085187] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.547 
10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.547 "name": "raid_bdev1", 00:14:31.547 "uuid": "c4696ebc-bbe8-49d1-b7bd-daf96763fe47", 00:14:31.547 "strip_size_kb": 0, 00:14:31.547 "state": "online", 00:14:31.547 "raid_level": "raid1", 00:14:31.547 "superblock": true, 00:14:31.547 "num_base_bdevs": 3, 00:14:31.547 "num_base_bdevs_discovered": 3, 00:14:31.547 "num_base_bdevs_operational": 3, 00:14:31.547 "base_bdevs_list": [ 00:14:31.547 { 00:14:31.547 "name": "BaseBdev1", 00:14:31.547 "uuid": "d39a080f-394e-5e22-a5cd-e3758909e02d", 00:14:31.547 "is_configured": true, 00:14:31.547 "data_offset": 2048, 00:14:31.547 "data_size": 63488 00:14:31.547 }, 00:14:31.547 { 00:14:31.547 "name": "BaseBdev2", 00:14:31.547 "uuid": "fd08d98b-8685-5bbf-b0dd-7863e529114c", 00:14:31.547 "is_configured": true, 00:14:31.547 "data_offset": 2048, 00:14:31.547 "data_size": 63488 00:14:31.547 }, 00:14:31.547 { 00:14:31.547 "name": "BaseBdev3", 00:14:31.547 "uuid": "16c78438-23ed-59a7-981d-2230418dc215", 00:14:31.547 "is_configured": true, 00:14:31.547 "data_offset": 2048, 00:14:31.547 "data_size": 63488 00:14:31.547 } 00:14:31.547 ] 00:14:31.547 }' 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.547 10:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.116 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:32.116 10:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:32.116 [2024-07-15 10:21:56.825813] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x213d690 00:14:33.055 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.315 10:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.315 10:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.315 "name": "raid_bdev1", 00:14:33.315 "uuid": "c4696ebc-bbe8-49d1-b7bd-daf96763fe47", 00:14:33.315 "strip_size_kb": 0, 00:14:33.315 "state": "online", 00:14:33.315 "raid_level": "raid1", 00:14:33.315 "superblock": true, 00:14:33.315 "num_base_bdevs": 3, 00:14:33.315 "num_base_bdevs_discovered": 3, 00:14:33.315 "num_base_bdevs_operational": 3, 00:14:33.315 "base_bdevs_list": [ 00:14:33.315 { 00:14:33.315 "name": "BaseBdev1", 00:14:33.315 "uuid": "d39a080f-394e-5e22-a5cd-e3758909e02d", 00:14:33.315 "is_configured": true, 00:14:33.315 "data_offset": 2048, 00:14:33.315 "data_size": 63488 00:14:33.315 }, 00:14:33.315 { 00:14:33.315 "name": "BaseBdev2", 00:14:33.315 "uuid": "fd08d98b-8685-5bbf-b0dd-7863e529114c", 00:14:33.315 "is_configured": true, 00:14:33.315 "data_offset": 2048, 00:14:33.315 "data_size": 63488 00:14:33.315 }, 00:14:33.315 { 00:14:33.315 "name": "BaseBdev3", 00:14:33.315 "uuid": "16c78438-23ed-59a7-981d-2230418dc215", 00:14:33.315 "is_configured": true, 00:14:33.315 "data_offset": 2048, 00:14:33.315 "data_size": 63488 00:14:33.315 } 00:14:33.315 ] 00:14:33.315 }' 00:14:33.315 10:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.315 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.884 10:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:34.142 [2024-07-15 10:21:58.750401] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.142 [2024-07-15 10:21:58.750438] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.142 [2024-07-15 10:21:58.752476] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.142 [2024-07-15 10:21:58.752499] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.142 [2024-07-15 10:21:58.752560] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.142 [2024-07-15 10:21:58.752568] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2138ba0 name raid_bdev1, state offline 00:14:34.142 0 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1801709 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1801709 ']' 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@952 -- # kill -0 1801709 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1801709 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1801709' 00:14:34.142 killing process with pid 1801709 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1801709 00:14:34.142 [2024-07-15 10:21:58.822215] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.142 10:21:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1801709 00:14:34.142 [2024-07-15 10:21:58.840289] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qbrGfcvSlc 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:34.401 00:14:34.401 real 0m5.548s 00:14:34.401 user 0m8.434s 00:14:34.401 sys 0m1.032s 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.401 10:21:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.401 ************************************ 00:14:34.401 END TEST raid_read_error_test 00:14:34.401 ************************************ 00:14:34.401 10:21:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:34.401 10:21:59 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:34.401 10:21:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:34.401 10:21:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.401 10:21:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.401 ************************************ 00:14:34.401 START TEST raid_write_error_test 00:14:34.401 ************************************ 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:34.401 10:21:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:34.401 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.uXtNTjnmRc 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1802847 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1802847 /var/tmp/spdk-raid.sock 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1802847 ']' 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:34.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
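The write-error variant reuses the same three-layer base bdev stack as the read test above: each base bdev is a malloc bdev wrapped by an error bdev and exposed through a passthru bdev that the raid then claims. A minimal sketch of that setup, assuming the workspace path and the /var/tmp/spdk-raid.sock socket used in this run (the rpc shell variable is introduced here only for brevity):

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      # 32 MB malloc bdev with 512-byte blocks, matching the geometry in this log
      $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
      # error bdev wraps the malloc bdev; it is registered as EE_BaseBdev<i>_malloc
      $rpc bdev_error_create "BaseBdev${i}_malloc"
      # passthru bdev exposes it under the name the raid bdev will claim
      $rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  # assemble the three passthru bdevs into a raid1 with an on-disk superblock (-s)
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s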
00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.402 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.402 [2024-07-15 10:21:59.178177] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:34.402 [2024-07-15 10:21:59.178220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1802847 ] 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested 
device 0000:3f:01.4 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:34.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:34.661 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:34.661 [2024-07-15 10:21:59.268759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.661 [2024-07-15 10:21:59.342817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.661 [2024-07-15 10:21:59.391154] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.661 [2024-07-15 10:21:59.391181] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.229 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.229 10:21:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:35.229 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:35.229 10:21:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:35.514 BaseBdev1_malloc 00:14:35.515 10:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:35.772 true 00:14:35.772 10:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:35.772 [2024-07-15 10:22:00.486963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:35.772 [2024-07-15 10:22:00.486995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.772 [2024-07-15 10:22:00.487009] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcef190 00:14:35.772 [2024-07-15 10:22:00.487017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:35.772 [2024-07-15 10:22:00.488200] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.772 [2024-07-15 10:22:00.488222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:35.772 BaseBdev1 00:14:35.772 10:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:35.772 10:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.030 BaseBdev2_malloc 00:14:36.030 10:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:36.288 true 00:14:36.288 10:22:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:36.288 [2024-07-15 10:22:00.995941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:36.288 [2024-07-15 10:22:00.995969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.288 [2024-07-15 10:22:00.995982] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcf3e20 00:14:36.288 [2024-07-15 10:22:00.995990] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.288 [2024-07-15 10:22:00.996971] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.288 [2024-07-15 10:22:00.996991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.288 BaseBdev2 00:14:36.288 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:36.288 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.546 BaseBdev3_malloc 00:14:36.546 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:36.804 true 00:14:36.804 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:36.804 [2024-07-15 10:22:01.516778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:36.804 [2024-07-15 10:22:01.516809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.804 [2024-07-15 10:22:01.516824] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcf4d90 00:14:36.804 [2024-07-15 10:22:01.516832] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.804 [2024-07-15 10:22:01.517877] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.804 [2024-07-15 10:22:01.517898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:36.804 BaseBdev3 00:14:36.804 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:37.064 [2024-07-15 10:22:01.673200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.064 [2024-07-15 10:22:01.674001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.064 [2024-07-15 10:22:01.674046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.064 [2024-07-15 10:22:01.674184] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xcf6ba0 00:14:37.064 [2024-07-15 10:22:01.674191] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.064 [2024-07-15 10:22:01.674309] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcf6820 00:14:37.064 [2024-07-15 10:22:01.674406] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcf6ba0 00:14:37.064 [2024-07-15 10:22:01.674412] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xcf6ba0 00:14:37.064 [2024-07-15 10:22:01.674475] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.064 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.323 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.323 "name": "raid_bdev1", 00:14:37.323 "uuid": "266228a3-1e2d-43ad-8ced-78774b9cc535", 00:14:37.323 "strip_size_kb": 0, 00:14:37.323 "state": "online", 00:14:37.323 "raid_level": "raid1", 00:14:37.323 "superblock": true, 00:14:37.323 "num_base_bdevs": 3, 00:14:37.323 "num_base_bdevs_discovered": 3, 00:14:37.323 "num_base_bdevs_operational": 3, 00:14:37.323 "base_bdevs_list": [ 00:14:37.323 { 00:14:37.323 "name": "BaseBdev1", 00:14:37.323 "uuid": "06d9ba43-928d-5039-aea6-11c1bb6c7c0a", 00:14:37.323 "is_configured": true, 00:14:37.323 "data_offset": 2048, 00:14:37.323 "data_size": 63488 00:14:37.323 }, 00:14:37.323 { 00:14:37.323 "name": "BaseBdev2", 00:14:37.323 "uuid": 
"d4269eb2-4d3e-5f78-841d-bdb34a809793", 00:14:37.323 "is_configured": true, 00:14:37.323 "data_offset": 2048, 00:14:37.323 "data_size": 63488 00:14:37.323 }, 00:14:37.323 { 00:14:37.323 "name": "BaseBdev3", 00:14:37.323 "uuid": "85842e07-5be4-5820-ad1b-14bf8005e389", 00:14:37.323 "is_configured": true, 00:14:37.323 "data_offset": 2048, 00:14:37.323 "data_size": 63488 00:14:37.323 } 00:14:37.323 ] 00:14:37.323 }' 00:14:37.323 10:22:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.323 10:22:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.581 10:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:37.581 10:22:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:37.840 [2024-07-15 10:22:02.399264] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcfb690 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:38.778 [2024-07-15 10:22:03.474801] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:38.778 [2024-07-15 10:22:03.474849] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.778 [2024-07-15 10:22:03.475024] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xcfb690 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.778 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:39.037 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.037 "name": "raid_bdev1", 00:14:39.037 "uuid": "266228a3-1e2d-43ad-8ced-78774b9cc535", 00:14:39.037 "strip_size_kb": 0, 00:14:39.037 "state": "online", 00:14:39.037 "raid_level": "raid1", 00:14:39.037 "superblock": true, 00:14:39.037 "num_base_bdevs": 3, 00:14:39.037 "num_base_bdevs_discovered": 2, 00:14:39.037 "num_base_bdevs_operational": 2, 00:14:39.037 "base_bdevs_list": [ 00:14:39.037 { 00:14:39.037 "name": null, 00:14:39.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.037 "is_configured": false, 00:14:39.037 "data_offset": 2048, 00:14:39.037 "data_size": 63488 00:14:39.037 }, 00:14:39.037 { 00:14:39.037 "name": "BaseBdev2", 00:14:39.037 "uuid": "d4269eb2-4d3e-5f78-841d-bdb34a809793", 00:14:39.037 "is_configured": true, 00:14:39.037 "data_offset": 2048, 00:14:39.037 "data_size": 63488 00:14:39.037 }, 00:14:39.037 { 00:14:39.037 "name": "BaseBdev3", 00:14:39.037 "uuid": "85842e07-5be4-5820-ad1b-14bf8005e389", 00:14:39.037 "is_configured": true, 00:14:39.037 "data_offset": 2048, 00:14:39.037 "data_size": 63488 00:14:39.037 } 00:14:39.037 ] 00:14:39.037 }' 00:14:39.037 10:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.037 10:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:39.606 [2024-07-15 10:22:04.301165] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.606 [2024-07-15 10:22:04.301194] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.606 [2024-07-15 10:22:04.303210] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.606 [2024-07-15 10:22:04.303231] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.606 [2024-07-15 10:22:04.303279] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.606 [2024-07-15 10:22:04.303286] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcf6ba0 name raid_bdev1, state offline 00:14:39.606 0 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1802847 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1802847 ']' 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1802847 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1802847 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1802847' 00:14:39.606 killing process with pid 1802847 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # 
kill 1802847 00:14:39.606 [2024-07-15 10:22:04.375565] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.606 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1802847 00:14:39.606 [2024-07-15 10:22:04.394151] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.uXtNTjnmRc 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:39.866 00:14:39.866 real 0m5.471s 00:14:39.866 user 0m8.335s 00:14:39.866 sys 0m0.978s 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.866 10:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.866 ************************************ 00:14:39.866 END TEST raid_write_error_test 00:14:39.866 ************************************ 00:14:39.866 10:22:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:39.866 10:22:04 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:39.866 10:22:04 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:39.866 10:22:04 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:39.866 10:22:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:39.866 10:22:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.866 10:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.126 ************************************ 00:14:40.126 START TEST raid_state_function_test 00:14:40.126 ************************************ 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- 
# echo BaseBdev2 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1803881 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1803881' 00:14:40.126 Process raid pid: 1803881 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1803881 /var/tmp/spdk-raid.sock 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1803881 ']' 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:40.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
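The state-function test only exercises RPC-driven state transitions, so instead of bdevperf it starts the lightweight bdev_svc app on the raid RPC socket and waits for that socket before issuing RPCs. A minimal sketch, assuming the workspace path from this run; the polling loop is a crude stand-in for the harness's own waitforlisten helper.

  svc=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc
  # -r selects the RPC socket, -i the shared-memory id, -L enables bdev_raid debug logging
  $svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  echo "Process raid pid: $raid_pid"
  # wait until the app has created its UNIX-domain RPC socket
  while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done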
00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.126 10:22:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.126 [2024-07-15 10:22:04.730000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:40.126 [2024-07-15 10:22:04.730047] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.126 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:40.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.126 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:40.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.126 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:40.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.126 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:40.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.126 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:40.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.126 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:40.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.126 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 
0000:3f:01.4 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:40.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:40.127 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:40.127 [2024-07-15 10:22:04.823969] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.127 [2024-07-15 10:22:04.897771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.387 [2024-07-15 10:22:04.948032] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.387 [2024-07-15 10:22:04.948055] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:40.955 [2024-07-15 10:22:05.675270] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.955 [2024-07-15 10:22:05.675304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.955 [2024-07-15 10:22:05.675311] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:40.955 [2024-07-15 10:22:05.675318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:40.955 [2024-07-15 10:22:05.675324] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:40.955 [2024-07-15 10:22:05.675330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:40.955 [2024-07-15 10:22:05.675336] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:40.955 [2024-07-15 10:22:05.675342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 
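bdev_raid_create accepts base bdev names that do not exist yet (hence the "Currently unable to find bdev" notices above): the raid bdev is registered in the configuring state and only goes online once all four base bdevs appear and are claimed. A small sketch of how that is observed over RPC, assuming the same socket; the jq filter matches the raid_bdev_info dumps in this log.

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # raid0 with a 64 KB strip size over four not-yet-existing base bdevs
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # state is "configuring" and num_base_bdevs_discovered is 0 at this point
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # creating a matching bdev afterwards gets it claimed, so the discovered count rises to 1
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'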
00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.955 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.213 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.213 "name": "Existed_Raid", 00:14:41.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.213 "strip_size_kb": 64, 00:14:41.213 "state": "configuring", 00:14:41.213 "raid_level": "raid0", 00:14:41.213 "superblock": false, 00:14:41.213 "num_base_bdevs": 4, 00:14:41.213 "num_base_bdevs_discovered": 0, 00:14:41.213 "num_base_bdevs_operational": 4, 00:14:41.213 "base_bdevs_list": [ 00:14:41.213 { 00:14:41.213 "name": "BaseBdev1", 00:14:41.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.213 "is_configured": false, 00:14:41.213 "data_offset": 0, 00:14:41.213 "data_size": 0 00:14:41.213 }, 00:14:41.213 { 00:14:41.213 "name": "BaseBdev2", 00:14:41.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.213 "is_configured": false, 00:14:41.213 "data_offset": 0, 00:14:41.213 "data_size": 0 00:14:41.213 }, 00:14:41.213 { 00:14:41.213 "name": "BaseBdev3", 00:14:41.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.213 "is_configured": false, 00:14:41.213 "data_offset": 0, 00:14:41.213 "data_size": 0 00:14:41.213 }, 00:14:41.213 { 00:14:41.213 "name": "BaseBdev4", 00:14:41.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.213 "is_configured": false, 00:14:41.213 "data_offset": 0, 00:14:41.213 "data_size": 0 00:14:41.213 } 00:14:41.213 ] 00:14:41.213 }' 00:14:41.213 10:22:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.213 10:22:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.782 10:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:41.782 [2024-07-15 10:22:06.533406] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.782 [2024-07-15 10:22:06.533434] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x1aa4f60 name Existed_Raid, state configuring 00:14:41.782 10:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:42.042 [2024-07-15 10:22:06.701846] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.042 [2024-07-15 10:22:06.701877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.042 [2024-07-15 10:22:06.701883] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.042 [2024-07-15 10:22:06.701890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.042 [2024-07-15 10:22:06.701895] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.042 [2024-07-15 10:22:06.701923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.042 [2024-07-15 10:22:06.701929] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.042 [2024-07-15 10:22:06.701936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.042 10:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.302 [2024-07-15 10:22:06.882709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.302 BaseBdev1 00:14:42.302 10:22:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:42.302 10:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:42.302 10:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:42.302 10:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:42.302 10:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:42.302 10:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:42.302 10:22:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.302 10:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.562 [ 00:14:42.562 { 00:14:42.562 "name": "BaseBdev1", 00:14:42.562 "aliases": [ 00:14:42.562 "131e35a8-115e-452b-9d11-fe8f9f6c4b0f" 00:14:42.562 ], 00:14:42.562 "product_name": "Malloc disk", 00:14:42.562 "block_size": 512, 00:14:42.562 "num_blocks": 65536, 00:14:42.562 "uuid": "131e35a8-115e-452b-9d11-fe8f9f6c4b0f", 00:14:42.562 "assigned_rate_limits": { 00:14:42.562 "rw_ios_per_sec": 0, 00:14:42.562 "rw_mbytes_per_sec": 0, 00:14:42.562 "r_mbytes_per_sec": 0, 00:14:42.562 "w_mbytes_per_sec": 0 00:14:42.562 }, 00:14:42.562 "claimed": true, 00:14:42.562 "claim_type": "exclusive_write", 00:14:42.562 "zoned": false, 00:14:42.562 "supported_io_types": { 00:14:42.562 "read": true, 00:14:42.562 "write": true, 
00:14:42.562 "unmap": true, 00:14:42.562 "flush": true, 00:14:42.562 "reset": true, 00:14:42.562 "nvme_admin": false, 00:14:42.562 "nvme_io": false, 00:14:42.562 "nvme_io_md": false, 00:14:42.562 "write_zeroes": true, 00:14:42.562 "zcopy": true, 00:14:42.562 "get_zone_info": false, 00:14:42.562 "zone_management": false, 00:14:42.562 "zone_append": false, 00:14:42.562 "compare": false, 00:14:42.562 "compare_and_write": false, 00:14:42.562 "abort": true, 00:14:42.562 "seek_hole": false, 00:14:42.562 "seek_data": false, 00:14:42.562 "copy": true, 00:14:42.562 "nvme_iov_md": false 00:14:42.562 }, 00:14:42.562 "memory_domains": [ 00:14:42.562 { 00:14:42.562 "dma_device_id": "system", 00:14:42.562 "dma_device_type": 1 00:14:42.562 }, 00:14:42.562 { 00:14:42.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.562 "dma_device_type": 2 00:14:42.562 } 00:14:42.562 ], 00:14:42.562 "driver_specific": {} 00:14:42.562 } 00:14:42.562 ] 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.562 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.821 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.821 "name": "Existed_Raid", 00:14:42.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.821 "strip_size_kb": 64, 00:14:42.821 "state": "configuring", 00:14:42.821 "raid_level": "raid0", 00:14:42.821 "superblock": false, 00:14:42.821 "num_base_bdevs": 4, 00:14:42.821 "num_base_bdevs_discovered": 1, 00:14:42.821 "num_base_bdevs_operational": 4, 00:14:42.821 "base_bdevs_list": [ 00:14:42.821 { 00:14:42.821 "name": "BaseBdev1", 00:14:42.821 "uuid": "131e35a8-115e-452b-9d11-fe8f9f6c4b0f", 00:14:42.821 "is_configured": true, 00:14:42.821 "data_offset": 0, 00:14:42.821 "data_size": 65536 00:14:42.821 }, 00:14:42.821 { 00:14:42.821 "name": "BaseBdev2", 00:14:42.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.821 "is_configured": false, 00:14:42.821 "data_offset": 0, 00:14:42.821 "data_size": 0 00:14:42.821 }, 00:14:42.821 { 00:14:42.821 "name": "BaseBdev3", 00:14:42.821 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:42.821 "is_configured": false, 00:14:42.821 "data_offset": 0, 00:14:42.821 "data_size": 0 00:14:42.821 }, 00:14:42.821 { 00:14:42.821 "name": "BaseBdev4", 00:14:42.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.821 "is_configured": false, 00:14:42.821 "data_offset": 0, 00:14:42.821 "data_size": 0 00:14:42.821 } 00:14:42.821 ] 00:14:42.821 }' 00:14:42.821 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.821 10:22:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.389 10:22:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:43.389 [2024-07-15 10:22:08.033682] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.389 [2024-07-15 10:22:08.033719] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1aa47d0 name Existed_Raid, state configuring 00:14:43.389 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:43.649 [2024-07-15 10:22:08.198125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.649 [2024-07-15 10:22:08.199199] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.649 [2024-07-15 10:22:08.199228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.649 [2024-07-15 10:22:08.199235] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:43.649 [2024-07-15 10:22:08.199242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:43.649 [2024-07-15 10:22:08.199248] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:43.649 [2024-07-15 10:22:08.199255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.649 
10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.649 "name": "Existed_Raid", 00:14:43.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.649 "strip_size_kb": 64, 00:14:43.649 "state": "configuring", 00:14:43.649 "raid_level": "raid0", 00:14:43.649 "superblock": false, 00:14:43.649 "num_base_bdevs": 4, 00:14:43.649 "num_base_bdevs_discovered": 1, 00:14:43.649 "num_base_bdevs_operational": 4, 00:14:43.649 "base_bdevs_list": [ 00:14:43.649 { 00:14:43.649 "name": "BaseBdev1", 00:14:43.649 "uuid": "131e35a8-115e-452b-9d11-fe8f9f6c4b0f", 00:14:43.649 "is_configured": true, 00:14:43.649 "data_offset": 0, 00:14:43.649 "data_size": 65536 00:14:43.649 }, 00:14:43.649 { 00:14:43.649 "name": "BaseBdev2", 00:14:43.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.649 "is_configured": false, 00:14:43.649 "data_offset": 0, 00:14:43.649 "data_size": 0 00:14:43.649 }, 00:14:43.649 { 00:14:43.649 "name": "BaseBdev3", 00:14:43.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.649 "is_configured": false, 00:14:43.649 "data_offset": 0, 00:14:43.649 "data_size": 0 00:14:43.649 }, 00:14:43.649 { 00:14:43.649 "name": "BaseBdev4", 00:14:43.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.649 "is_configured": false, 00:14:43.649 "data_offset": 0, 00:14:43.649 "data_size": 0 00:14:43.649 } 00:14:43.649 ] 00:14:43.649 }' 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.649 10:22:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.216 10:22:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:44.476 [2024-07-15 10:22:09.026956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.476 BaseBdev2 00:14:44.476 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:44.476 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:44.476 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:44.476 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:44.476 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:44.476 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:44.476 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:44.476 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:44.735 [ 00:14:44.735 { 00:14:44.735 
"name": "BaseBdev2", 00:14:44.735 "aliases": [ 00:14:44.735 "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28" 00:14:44.735 ], 00:14:44.735 "product_name": "Malloc disk", 00:14:44.735 "block_size": 512, 00:14:44.735 "num_blocks": 65536, 00:14:44.735 "uuid": "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28", 00:14:44.735 "assigned_rate_limits": { 00:14:44.735 "rw_ios_per_sec": 0, 00:14:44.735 "rw_mbytes_per_sec": 0, 00:14:44.735 "r_mbytes_per_sec": 0, 00:14:44.735 "w_mbytes_per_sec": 0 00:14:44.735 }, 00:14:44.735 "claimed": true, 00:14:44.735 "claim_type": "exclusive_write", 00:14:44.735 "zoned": false, 00:14:44.735 "supported_io_types": { 00:14:44.735 "read": true, 00:14:44.735 "write": true, 00:14:44.735 "unmap": true, 00:14:44.735 "flush": true, 00:14:44.735 "reset": true, 00:14:44.735 "nvme_admin": false, 00:14:44.735 "nvme_io": false, 00:14:44.735 "nvme_io_md": false, 00:14:44.735 "write_zeroes": true, 00:14:44.735 "zcopy": true, 00:14:44.735 "get_zone_info": false, 00:14:44.735 "zone_management": false, 00:14:44.735 "zone_append": false, 00:14:44.735 "compare": false, 00:14:44.735 "compare_and_write": false, 00:14:44.735 "abort": true, 00:14:44.735 "seek_hole": false, 00:14:44.735 "seek_data": false, 00:14:44.735 "copy": true, 00:14:44.735 "nvme_iov_md": false 00:14:44.735 }, 00:14:44.735 "memory_domains": [ 00:14:44.735 { 00:14:44.735 "dma_device_id": "system", 00:14:44.735 "dma_device_type": 1 00:14:44.735 }, 00:14:44.735 { 00:14:44.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.735 "dma_device_type": 2 00:14:44.735 } 00:14:44.735 ], 00:14:44.735 "driver_specific": {} 00:14:44.735 } 00:14:44.735 ] 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.735 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.994 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.994 "name": 
"Existed_Raid", 00:14:44.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.994 "strip_size_kb": 64, 00:14:44.994 "state": "configuring", 00:14:44.994 "raid_level": "raid0", 00:14:44.994 "superblock": false, 00:14:44.994 "num_base_bdevs": 4, 00:14:44.994 "num_base_bdevs_discovered": 2, 00:14:44.994 "num_base_bdevs_operational": 4, 00:14:44.994 "base_bdevs_list": [ 00:14:44.994 { 00:14:44.994 "name": "BaseBdev1", 00:14:44.994 "uuid": "131e35a8-115e-452b-9d11-fe8f9f6c4b0f", 00:14:44.994 "is_configured": true, 00:14:44.994 "data_offset": 0, 00:14:44.994 "data_size": 65536 00:14:44.994 }, 00:14:44.994 { 00:14:44.994 "name": "BaseBdev2", 00:14:44.994 "uuid": "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28", 00:14:44.994 "is_configured": true, 00:14:44.994 "data_offset": 0, 00:14:44.994 "data_size": 65536 00:14:44.994 }, 00:14:44.994 { 00:14:44.994 "name": "BaseBdev3", 00:14:44.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.994 "is_configured": false, 00:14:44.994 "data_offset": 0, 00:14:44.994 "data_size": 0 00:14:44.994 }, 00:14:44.994 { 00:14:44.994 "name": "BaseBdev4", 00:14:44.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.994 "is_configured": false, 00:14:44.994 "data_offset": 0, 00:14:44.994 "data_size": 0 00:14:44.994 } 00:14:44.994 ] 00:14:44.994 }' 00:14:44.994 10:22:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.994 10:22:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.561 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:45.561 [2024-07-15 10:22:10.216709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.561 BaseBdev3 00:14:45.561 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:45.561 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:45.561 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.561 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:45.561 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.561 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.561 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:45.821 [ 00:14:45.821 { 00:14:45.821 "name": "BaseBdev3", 00:14:45.821 "aliases": [ 00:14:45.821 "c4acd887-54bd-4fc9-93f7-042e6345be74" 00:14:45.821 ], 00:14:45.821 "product_name": "Malloc disk", 00:14:45.821 "block_size": 512, 00:14:45.821 "num_blocks": 65536, 00:14:45.821 "uuid": "c4acd887-54bd-4fc9-93f7-042e6345be74", 00:14:45.821 "assigned_rate_limits": { 00:14:45.821 "rw_ios_per_sec": 0, 00:14:45.821 "rw_mbytes_per_sec": 0, 00:14:45.821 "r_mbytes_per_sec": 0, 00:14:45.821 "w_mbytes_per_sec": 0 00:14:45.821 }, 00:14:45.821 "claimed": true, 00:14:45.821 "claim_type": 
"exclusive_write", 00:14:45.821 "zoned": false, 00:14:45.821 "supported_io_types": { 00:14:45.821 "read": true, 00:14:45.821 "write": true, 00:14:45.821 "unmap": true, 00:14:45.821 "flush": true, 00:14:45.821 "reset": true, 00:14:45.821 "nvme_admin": false, 00:14:45.821 "nvme_io": false, 00:14:45.821 "nvme_io_md": false, 00:14:45.821 "write_zeroes": true, 00:14:45.821 "zcopy": true, 00:14:45.821 "get_zone_info": false, 00:14:45.821 "zone_management": false, 00:14:45.821 "zone_append": false, 00:14:45.821 "compare": false, 00:14:45.821 "compare_and_write": false, 00:14:45.821 "abort": true, 00:14:45.821 "seek_hole": false, 00:14:45.821 "seek_data": false, 00:14:45.821 "copy": true, 00:14:45.821 "nvme_iov_md": false 00:14:45.821 }, 00:14:45.821 "memory_domains": [ 00:14:45.821 { 00:14:45.821 "dma_device_id": "system", 00:14:45.821 "dma_device_type": 1 00:14:45.821 }, 00:14:45.821 { 00:14:45.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.821 "dma_device_type": 2 00:14:45.821 } 00:14:45.821 ], 00:14:45.821 "driver_specific": {} 00:14:45.821 } 00:14:45.821 ] 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.821 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.080 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.080 "name": "Existed_Raid", 00:14:46.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.080 "strip_size_kb": 64, 00:14:46.080 "state": "configuring", 00:14:46.080 "raid_level": "raid0", 00:14:46.080 "superblock": false, 00:14:46.080 "num_base_bdevs": 4, 00:14:46.080 "num_base_bdevs_discovered": 3, 00:14:46.080 "num_base_bdevs_operational": 4, 00:14:46.080 "base_bdevs_list": [ 00:14:46.080 { 00:14:46.080 "name": "BaseBdev1", 00:14:46.080 "uuid": "131e35a8-115e-452b-9d11-fe8f9f6c4b0f", 00:14:46.080 "is_configured": true, 00:14:46.080 
"data_offset": 0, 00:14:46.080 "data_size": 65536 00:14:46.080 }, 00:14:46.080 { 00:14:46.081 "name": "BaseBdev2", 00:14:46.081 "uuid": "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28", 00:14:46.081 "is_configured": true, 00:14:46.081 "data_offset": 0, 00:14:46.081 "data_size": 65536 00:14:46.081 }, 00:14:46.081 { 00:14:46.081 "name": "BaseBdev3", 00:14:46.081 "uuid": "c4acd887-54bd-4fc9-93f7-042e6345be74", 00:14:46.081 "is_configured": true, 00:14:46.081 "data_offset": 0, 00:14:46.081 "data_size": 65536 00:14:46.081 }, 00:14:46.081 { 00:14:46.081 "name": "BaseBdev4", 00:14:46.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.081 "is_configured": false, 00:14:46.081 "data_offset": 0, 00:14:46.081 "data_size": 0 00:14:46.081 } 00:14:46.081 ] 00:14:46.081 }' 00:14:46.081 10:22:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.081 10:22:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.648 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:46.648 [2024-07-15 10:22:11.382609] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:46.648 [2024-07-15 10:22:11.382638] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1aa5830 00:14:46.648 [2024-07-15 10:22:11.382644] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:46.648 [2024-07-15 10:22:11.382778] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a9e160 00:14:46.648 [2024-07-15 10:22:11.382860] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1aa5830 00:14:46.648 [2024-07-15 10:22:11.382870] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1aa5830 00:14:46.648 [2024-07-15 10:22:11.383012] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.648 BaseBdev4 00:14:46.648 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:46.648 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:46.648 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.648 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:46.648 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.648 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.648 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.907 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:47.166 [ 00:14:47.166 { 00:14:47.166 "name": "BaseBdev4", 00:14:47.166 "aliases": [ 00:14:47.166 "9843c5f2-3b98-43df-8fa5-f3a92319b64e" 00:14:47.166 ], 00:14:47.166 "product_name": "Malloc disk", 00:14:47.166 "block_size": 512, 00:14:47.166 "num_blocks": 65536, 00:14:47.167 "uuid": "9843c5f2-3b98-43df-8fa5-f3a92319b64e", 00:14:47.167 
"assigned_rate_limits": { 00:14:47.167 "rw_ios_per_sec": 0, 00:14:47.167 "rw_mbytes_per_sec": 0, 00:14:47.167 "r_mbytes_per_sec": 0, 00:14:47.167 "w_mbytes_per_sec": 0 00:14:47.167 }, 00:14:47.167 "claimed": true, 00:14:47.167 "claim_type": "exclusive_write", 00:14:47.167 "zoned": false, 00:14:47.167 "supported_io_types": { 00:14:47.167 "read": true, 00:14:47.167 "write": true, 00:14:47.167 "unmap": true, 00:14:47.167 "flush": true, 00:14:47.167 "reset": true, 00:14:47.167 "nvme_admin": false, 00:14:47.167 "nvme_io": false, 00:14:47.167 "nvme_io_md": false, 00:14:47.167 "write_zeroes": true, 00:14:47.167 "zcopy": true, 00:14:47.167 "get_zone_info": false, 00:14:47.167 "zone_management": false, 00:14:47.167 "zone_append": false, 00:14:47.167 "compare": false, 00:14:47.167 "compare_and_write": false, 00:14:47.167 "abort": true, 00:14:47.167 "seek_hole": false, 00:14:47.167 "seek_data": false, 00:14:47.167 "copy": true, 00:14:47.167 "nvme_iov_md": false 00:14:47.167 }, 00:14:47.167 "memory_domains": [ 00:14:47.167 { 00:14:47.167 "dma_device_id": "system", 00:14:47.167 "dma_device_type": 1 00:14:47.167 }, 00:14:47.167 { 00:14:47.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.167 "dma_device_type": 2 00:14:47.167 } 00:14:47.167 ], 00:14:47.167 "driver_specific": {} 00:14:47.167 } 00:14:47.167 ] 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.167 "name": "Existed_Raid", 00:14:47.167 "uuid": "a8023caa-b420-49e2-9c81-83c4ab45f512", 00:14:47.167 "strip_size_kb": 64, 00:14:47.167 "state": "online", 00:14:47.167 "raid_level": "raid0", 00:14:47.167 "superblock": false, 00:14:47.167 "num_base_bdevs": 4, 00:14:47.167 "num_base_bdevs_discovered": 4, 00:14:47.167 
"num_base_bdevs_operational": 4, 00:14:47.167 "base_bdevs_list": [ 00:14:47.167 { 00:14:47.167 "name": "BaseBdev1", 00:14:47.167 "uuid": "131e35a8-115e-452b-9d11-fe8f9f6c4b0f", 00:14:47.167 "is_configured": true, 00:14:47.167 "data_offset": 0, 00:14:47.167 "data_size": 65536 00:14:47.167 }, 00:14:47.167 { 00:14:47.167 "name": "BaseBdev2", 00:14:47.167 "uuid": "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28", 00:14:47.167 "is_configured": true, 00:14:47.167 "data_offset": 0, 00:14:47.167 "data_size": 65536 00:14:47.167 }, 00:14:47.167 { 00:14:47.167 "name": "BaseBdev3", 00:14:47.167 "uuid": "c4acd887-54bd-4fc9-93f7-042e6345be74", 00:14:47.167 "is_configured": true, 00:14:47.167 "data_offset": 0, 00:14:47.167 "data_size": 65536 00:14:47.167 }, 00:14:47.167 { 00:14:47.167 "name": "BaseBdev4", 00:14:47.167 "uuid": "9843c5f2-3b98-43df-8fa5-f3a92319b64e", 00:14:47.167 "is_configured": true, 00:14:47.167 "data_offset": 0, 00:14:47.167 "data_size": 65536 00:14:47.167 } 00:14:47.167 ] 00:14:47.167 }' 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.167 10:22:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.735 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.735 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:47.735 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:47.735 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:47.735 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:47.735 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:47.735 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:47.735 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:47.735 [2024-07-15 10:22:12.513732] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.994 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:47.994 "name": "Existed_Raid", 00:14:47.994 "aliases": [ 00:14:47.994 "a8023caa-b420-49e2-9c81-83c4ab45f512" 00:14:47.994 ], 00:14:47.994 "product_name": "Raid Volume", 00:14:47.994 "block_size": 512, 00:14:47.994 "num_blocks": 262144, 00:14:47.994 "uuid": "a8023caa-b420-49e2-9c81-83c4ab45f512", 00:14:47.994 "assigned_rate_limits": { 00:14:47.994 "rw_ios_per_sec": 0, 00:14:47.994 "rw_mbytes_per_sec": 0, 00:14:47.994 "r_mbytes_per_sec": 0, 00:14:47.994 "w_mbytes_per_sec": 0 00:14:47.994 }, 00:14:47.994 "claimed": false, 00:14:47.994 "zoned": false, 00:14:47.994 "supported_io_types": { 00:14:47.994 "read": true, 00:14:47.994 "write": true, 00:14:47.994 "unmap": true, 00:14:47.994 "flush": true, 00:14:47.994 "reset": true, 00:14:47.994 "nvme_admin": false, 00:14:47.994 "nvme_io": false, 00:14:47.994 "nvme_io_md": false, 00:14:47.994 "write_zeroes": true, 00:14:47.994 "zcopy": false, 00:14:47.994 "get_zone_info": false, 00:14:47.994 "zone_management": false, 00:14:47.994 "zone_append": false, 00:14:47.994 "compare": false, 00:14:47.994 "compare_and_write": false, 00:14:47.994 "abort": false, 00:14:47.994 "seek_hole": false, 
00:14:47.994 "seek_data": false, 00:14:47.994 "copy": false, 00:14:47.994 "nvme_iov_md": false 00:14:47.994 }, 00:14:47.994 "memory_domains": [ 00:14:47.994 { 00:14:47.994 "dma_device_id": "system", 00:14:47.994 "dma_device_type": 1 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.994 "dma_device_type": 2 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "dma_device_id": "system", 00:14:47.994 "dma_device_type": 1 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.994 "dma_device_type": 2 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "dma_device_id": "system", 00:14:47.994 "dma_device_type": 1 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.994 "dma_device_type": 2 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "dma_device_id": "system", 00:14:47.994 "dma_device_type": 1 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.994 "dma_device_type": 2 00:14:47.994 } 00:14:47.994 ], 00:14:47.994 "driver_specific": { 00:14:47.994 "raid": { 00:14:47.994 "uuid": "a8023caa-b420-49e2-9c81-83c4ab45f512", 00:14:47.994 "strip_size_kb": 64, 00:14:47.994 "state": "online", 00:14:47.994 "raid_level": "raid0", 00:14:47.994 "superblock": false, 00:14:47.994 "num_base_bdevs": 4, 00:14:47.994 "num_base_bdevs_discovered": 4, 00:14:47.994 "num_base_bdevs_operational": 4, 00:14:47.994 "base_bdevs_list": [ 00:14:47.994 { 00:14:47.994 "name": "BaseBdev1", 00:14:47.994 "uuid": "131e35a8-115e-452b-9d11-fe8f9f6c4b0f", 00:14:47.994 "is_configured": true, 00:14:47.994 "data_offset": 0, 00:14:47.994 "data_size": 65536 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "name": "BaseBdev2", 00:14:47.994 "uuid": "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28", 00:14:47.994 "is_configured": true, 00:14:47.994 "data_offset": 0, 00:14:47.994 "data_size": 65536 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "name": "BaseBdev3", 00:14:47.994 "uuid": "c4acd887-54bd-4fc9-93f7-042e6345be74", 00:14:47.994 "is_configured": true, 00:14:47.994 "data_offset": 0, 00:14:47.994 "data_size": 65536 00:14:47.994 }, 00:14:47.994 { 00:14:47.994 "name": "BaseBdev4", 00:14:47.994 "uuid": "9843c5f2-3b98-43df-8fa5-f3a92319b64e", 00:14:47.994 "is_configured": true, 00:14:47.994 "data_offset": 0, 00:14:47.994 "data_size": 65536 00:14:47.994 } 00:14:47.994 ] 00:14:47.994 } 00:14:47.995 } 00:14:47.995 }' 00:14:47.995 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.995 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:47.995 BaseBdev2 00:14:47.995 BaseBdev3 00:14:47.995 BaseBdev4' 00:14:47.995 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:47.995 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:47.995 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:47.995 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:47.995 "name": "BaseBdev1", 00:14:47.995 "aliases": [ 00:14:47.995 "131e35a8-115e-452b-9d11-fe8f9f6c4b0f" 00:14:47.995 ], 00:14:47.995 "product_name": "Malloc disk", 00:14:47.995 "block_size": 512, 00:14:47.995 "num_blocks": 65536, 
00:14:47.995 "uuid": "131e35a8-115e-452b-9d11-fe8f9f6c4b0f", 00:14:47.995 "assigned_rate_limits": { 00:14:47.995 "rw_ios_per_sec": 0, 00:14:47.995 "rw_mbytes_per_sec": 0, 00:14:47.995 "r_mbytes_per_sec": 0, 00:14:47.995 "w_mbytes_per_sec": 0 00:14:47.995 }, 00:14:47.995 "claimed": true, 00:14:47.995 "claim_type": "exclusive_write", 00:14:47.995 "zoned": false, 00:14:47.995 "supported_io_types": { 00:14:47.995 "read": true, 00:14:47.995 "write": true, 00:14:47.995 "unmap": true, 00:14:47.995 "flush": true, 00:14:47.995 "reset": true, 00:14:47.995 "nvme_admin": false, 00:14:47.995 "nvme_io": false, 00:14:47.995 "nvme_io_md": false, 00:14:47.995 "write_zeroes": true, 00:14:47.995 "zcopy": true, 00:14:47.995 "get_zone_info": false, 00:14:47.995 "zone_management": false, 00:14:47.995 "zone_append": false, 00:14:47.995 "compare": false, 00:14:47.995 "compare_and_write": false, 00:14:47.995 "abort": true, 00:14:47.995 "seek_hole": false, 00:14:47.995 "seek_data": false, 00:14:47.995 "copy": true, 00:14:47.995 "nvme_iov_md": false 00:14:47.995 }, 00:14:47.995 "memory_domains": [ 00:14:47.995 { 00:14:47.995 "dma_device_id": "system", 00:14:47.995 "dma_device_type": 1 00:14:47.995 }, 00:14:47.995 { 00:14:47.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.995 "dma_device_type": 2 00:14:47.995 } 00:14:47.995 ], 00:14:47.995 "driver_specific": {} 00:14:47.995 }' 00:14:47.995 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:48.253 10:22:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.253 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.512 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:48.512 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:48.512 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:48.512 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:48.512 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:48.512 "name": "BaseBdev2", 00:14:48.512 "aliases": [ 00:14:48.512 "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28" 00:14:48.512 ], 00:14:48.512 "product_name": "Malloc disk", 00:14:48.512 "block_size": 512, 00:14:48.512 "num_blocks": 65536, 00:14:48.512 "uuid": "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28", 00:14:48.512 "assigned_rate_limits": { 00:14:48.512 "rw_ios_per_sec": 0, 00:14:48.512 "rw_mbytes_per_sec": 0, 
00:14:48.512 "r_mbytes_per_sec": 0, 00:14:48.512 "w_mbytes_per_sec": 0 00:14:48.512 }, 00:14:48.512 "claimed": true, 00:14:48.512 "claim_type": "exclusive_write", 00:14:48.512 "zoned": false, 00:14:48.512 "supported_io_types": { 00:14:48.512 "read": true, 00:14:48.512 "write": true, 00:14:48.512 "unmap": true, 00:14:48.512 "flush": true, 00:14:48.512 "reset": true, 00:14:48.512 "nvme_admin": false, 00:14:48.512 "nvme_io": false, 00:14:48.512 "nvme_io_md": false, 00:14:48.512 "write_zeroes": true, 00:14:48.512 "zcopy": true, 00:14:48.512 "get_zone_info": false, 00:14:48.512 "zone_management": false, 00:14:48.512 "zone_append": false, 00:14:48.512 "compare": false, 00:14:48.512 "compare_and_write": false, 00:14:48.512 "abort": true, 00:14:48.512 "seek_hole": false, 00:14:48.512 "seek_data": false, 00:14:48.512 "copy": true, 00:14:48.512 "nvme_iov_md": false 00:14:48.512 }, 00:14:48.512 "memory_domains": [ 00:14:48.512 { 00:14:48.512 "dma_device_id": "system", 00:14:48.512 "dma_device_type": 1 00:14:48.512 }, 00:14:48.512 { 00:14:48.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.512 "dma_device_type": 2 00:14:48.512 } 00:14:48.512 ], 00:14:48.512 "driver_specific": {} 00:14:48.512 }' 00:14:48.512 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:48.512 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:48.771 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:49.052 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:49.052 "name": "BaseBdev3", 00:14:49.052 "aliases": [ 00:14:49.052 "c4acd887-54bd-4fc9-93f7-042e6345be74" 00:14:49.052 ], 00:14:49.052 "product_name": "Malloc disk", 00:14:49.052 "block_size": 512, 00:14:49.052 "num_blocks": 65536, 00:14:49.052 "uuid": "c4acd887-54bd-4fc9-93f7-042e6345be74", 00:14:49.052 "assigned_rate_limits": { 00:14:49.052 "rw_ios_per_sec": 0, 00:14:49.052 "rw_mbytes_per_sec": 0, 00:14:49.052 "r_mbytes_per_sec": 0, 00:14:49.052 "w_mbytes_per_sec": 0 00:14:49.052 }, 00:14:49.052 "claimed": true, 00:14:49.052 "claim_type": "exclusive_write", 00:14:49.052 
"zoned": false, 00:14:49.052 "supported_io_types": { 00:14:49.052 "read": true, 00:14:49.052 "write": true, 00:14:49.052 "unmap": true, 00:14:49.052 "flush": true, 00:14:49.052 "reset": true, 00:14:49.052 "nvme_admin": false, 00:14:49.052 "nvme_io": false, 00:14:49.052 "nvme_io_md": false, 00:14:49.052 "write_zeroes": true, 00:14:49.052 "zcopy": true, 00:14:49.052 "get_zone_info": false, 00:14:49.052 "zone_management": false, 00:14:49.052 "zone_append": false, 00:14:49.052 "compare": false, 00:14:49.052 "compare_and_write": false, 00:14:49.052 "abort": true, 00:14:49.052 "seek_hole": false, 00:14:49.052 "seek_data": false, 00:14:49.052 "copy": true, 00:14:49.052 "nvme_iov_md": false 00:14:49.052 }, 00:14:49.052 "memory_domains": [ 00:14:49.052 { 00:14:49.052 "dma_device_id": "system", 00:14:49.052 "dma_device_type": 1 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.052 "dma_device_type": 2 00:14:49.052 } 00:14:49.052 ], 00:14:49.052 "driver_specific": {} 00:14:49.052 }' 00:14:49.052 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:49.052 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:49.052 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:49.052 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:49.335 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:49.335 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:49.335 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:49.335 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:49.335 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:49.335 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:49.335 10:22:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:49.335 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:49.335 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:49.335 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:49.335 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:49.594 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:49.594 "name": "BaseBdev4", 00:14:49.594 "aliases": [ 00:14:49.594 "9843c5f2-3b98-43df-8fa5-f3a92319b64e" 00:14:49.594 ], 00:14:49.594 "product_name": "Malloc disk", 00:14:49.594 "block_size": 512, 00:14:49.594 "num_blocks": 65536, 00:14:49.594 "uuid": "9843c5f2-3b98-43df-8fa5-f3a92319b64e", 00:14:49.594 "assigned_rate_limits": { 00:14:49.594 "rw_ios_per_sec": 0, 00:14:49.594 "rw_mbytes_per_sec": 0, 00:14:49.594 "r_mbytes_per_sec": 0, 00:14:49.594 "w_mbytes_per_sec": 0 00:14:49.594 }, 00:14:49.594 "claimed": true, 00:14:49.594 "claim_type": "exclusive_write", 00:14:49.594 "zoned": false, 00:14:49.594 "supported_io_types": { 00:14:49.594 "read": true, 00:14:49.594 "write": true, 00:14:49.594 "unmap": true, 00:14:49.594 "flush": true, 00:14:49.594 
"reset": true, 00:14:49.594 "nvme_admin": false, 00:14:49.594 "nvme_io": false, 00:14:49.594 "nvme_io_md": false, 00:14:49.594 "write_zeroes": true, 00:14:49.594 "zcopy": true, 00:14:49.594 "get_zone_info": false, 00:14:49.594 "zone_management": false, 00:14:49.594 "zone_append": false, 00:14:49.594 "compare": false, 00:14:49.594 "compare_and_write": false, 00:14:49.594 "abort": true, 00:14:49.594 "seek_hole": false, 00:14:49.594 "seek_data": false, 00:14:49.594 "copy": true, 00:14:49.594 "nvme_iov_md": false 00:14:49.594 }, 00:14:49.594 "memory_domains": [ 00:14:49.594 { 00:14:49.594 "dma_device_id": "system", 00:14:49.594 "dma_device_type": 1 00:14:49.594 }, 00:14:49.594 { 00:14:49.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.594 "dma_device_type": 2 00:14:49.594 } 00:14:49.594 ], 00:14:49.594 "driver_specific": {} 00:14:49.594 }' 00:14:49.594 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:49.594 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:49.594 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:49.594 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:49.594 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:49.594 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:49.594 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:49.852 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:49.852 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:49.853 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:49.853 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:49.853 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:49.853 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.111 [2024-07-15 10:22:14.663139] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.111 [2024-07-15 10:22:14.663160] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.111 [2024-07-15 10:22:14.663194] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 
00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.111 "name": "Existed_Raid", 00:14:50.111 "uuid": "a8023caa-b420-49e2-9c81-83c4ab45f512", 00:14:50.111 "strip_size_kb": 64, 00:14:50.111 "state": "offline", 00:14:50.111 "raid_level": "raid0", 00:14:50.111 "superblock": false, 00:14:50.111 "num_base_bdevs": 4, 00:14:50.111 "num_base_bdevs_discovered": 3, 00:14:50.111 "num_base_bdevs_operational": 3, 00:14:50.111 "base_bdevs_list": [ 00:14:50.111 { 00:14:50.111 "name": null, 00:14:50.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.111 "is_configured": false, 00:14:50.111 "data_offset": 0, 00:14:50.111 "data_size": 65536 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "BaseBdev2", 00:14:50.111 "uuid": "b2d8ccc6-f0fb-4850-9a77-50dfb4c03a28", 00:14:50.111 "is_configured": true, 00:14:50.111 "data_offset": 0, 00:14:50.111 "data_size": 65536 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "BaseBdev3", 00:14:50.111 "uuid": "c4acd887-54bd-4fc9-93f7-042e6345be74", 00:14:50.111 "is_configured": true, 00:14:50.111 "data_offset": 0, 00:14:50.111 "data_size": 65536 00:14:50.111 }, 00:14:50.111 { 00:14:50.111 "name": "BaseBdev4", 00:14:50.111 "uuid": "9843c5f2-3b98-43df-8fa5-f3a92319b64e", 00:14:50.111 "is_configured": true, 00:14:50.111 "data_offset": 0, 00:14:50.111 "data_size": 65536 00:14:50.111 } 00:14:50.111 ] 00:14:50.111 }' 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.111 10:22:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.678 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:50.678 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:50.678 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.678 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:50.937 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:50.937 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.937 10:22:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:50.937 [2024-07-15 10:22:15.642458] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.937 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:50.937 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:50.937 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.937 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:51.195 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:51.195 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.195 10:22:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:51.454 [2024-07-15 10:22:15.993067] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:51.454 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:51.454 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:51.454 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.454 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:51.454 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:51.454 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.454 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:51.713 [2024-07-15 10:22:16.347477] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:51.713 [2024-07-15 10:22:16.347510] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1aa5830 name Existed_Raid, state offline 00:14:51.713 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:51.713 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:51.713 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.713 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # 
(( i < num_base_bdevs )) 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.971 BaseBdev2 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:51.971 10:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.228 10:22:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.486 [ 00:14:52.486 { 00:14:52.486 "name": "BaseBdev2", 00:14:52.486 "aliases": [ 00:14:52.486 "4af2f039-c5ed-4d21-8e97-e21ee3b30c35" 00:14:52.486 ], 00:14:52.486 "product_name": "Malloc disk", 00:14:52.486 "block_size": 512, 00:14:52.486 "num_blocks": 65536, 00:14:52.486 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:14:52.486 "assigned_rate_limits": { 00:14:52.486 "rw_ios_per_sec": 0, 00:14:52.486 "rw_mbytes_per_sec": 0, 00:14:52.486 "r_mbytes_per_sec": 0, 00:14:52.486 "w_mbytes_per_sec": 0 00:14:52.486 }, 00:14:52.486 "claimed": false, 00:14:52.486 "zoned": false, 00:14:52.486 "supported_io_types": { 00:14:52.486 "read": true, 00:14:52.486 "write": true, 00:14:52.486 "unmap": true, 00:14:52.486 "flush": true, 00:14:52.486 "reset": true, 00:14:52.486 "nvme_admin": false, 00:14:52.486 "nvme_io": false, 00:14:52.486 "nvme_io_md": false, 00:14:52.486 "write_zeroes": true, 00:14:52.486 "zcopy": true, 00:14:52.486 "get_zone_info": false, 00:14:52.486 "zone_management": false, 00:14:52.486 "zone_append": false, 00:14:52.486 "compare": false, 00:14:52.486 "compare_and_write": false, 00:14:52.486 "abort": true, 00:14:52.486 "seek_hole": false, 00:14:52.486 "seek_data": false, 00:14:52.486 "copy": true, 00:14:52.486 "nvme_iov_md": false 00:14:52.486 }, 00:14:52.486 "memory_domains": [ 00:14:52.486 { 00:14:52.486 "dma_device_id": "system", 00:14:52.486 "dma_device_type": 1 00:14:52.486 }, 00:14:52.486 { 00:14:52.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.486 "dma_device_type": 2 00:14:52.486 } 00:14:52.486 ], 00:14:52.486 "driver_specific": {} 00:14:52.486 } 00:14:52.486 ] 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.486 BaseBdev3 00:14:52.486 10:22:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:52.486 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.744 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.744 [ 00:14:52.744 { 00:14:52.744 "name": "BaseBdev3", 00:14:52.744 "aliases": [ 00:14:52.744 "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c" 00:14:52.744 ], 00:14:52.744 "product_name": "Malloc disk", 00:14:52.744 "block_size": 512, 00:14:52.744 "num_blocks": 65536, 00:14:52.744 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:14:52.744 "assigned_rate_limits": { 00:14:52.744 "rw_ios_per_sec": 0, 00:14:52.744 "rw_mbytes_per_sec": 0, 00:14:52.744 "r_mbytes_per_sec": 0, 00:14:52.744 "w_mbytes_per_sec": 0 00:14:52.744 }, 00:14:52.744 "claimed": false, 00:14:52.744 "zoned": false, 00:14:52.744 "supported_io_types": { 00:14:52.744 "read": true, 00:14:52.744 "write": true, 00:14:52.744 "unmap": true, 00:14:52.744 "flush": true, 00:14:52.744 "reset": true, 00:14:52.744 "nvme_admin": false, 00:14:52.744 "nvme_io": false, 00:14:52.744 "nvme_io_md": false, 00:14:52.744 "write_zeroes": true, 00:14:52.744 "zcopy": true, 00:14:52.744 "get_zone_info": false, 00:14:52.744 "zone_management": false, 00:14:52.744 "zone_append": false, 00:14:52.744 "compare": false, 00:14:52.744 "compare_and_write": false, 00:14:52.744 "abort": true, 00:14:52.744 "seek_hole": false, 00:14:52.744 "seek_data": false, 00:14:52.744 "copy": true, 00:14:52.744 "nvme_iov_md": false 00:14:52.744 }, 00:14:52.744 "memory_domains": [ 00:14:52.744 { 00:14:52.744 "dma_device_id": "system", 00:14:52.744 "dma_device_type": 1 00:14:52.744 }, 00:14:52.744 { 00:14:52.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.744 "dma_device_type": 2 00:14:52.744 } 00:14:52.744 ], 00:14:52.744 "driver_specific": {} 00:14:52.744 } 00:14:52.744 ] 00:14:52.744 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:52.744 10:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:52.744 10:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:52.744 10:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:53.002 BaseBdev4 00:14:53.002 10:22:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:53.002 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:53.002 10:22:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:53.002 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:53.002 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:53.002 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:53.002 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.260 10:22:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:53.260 [ 00:14:53.260 { 00:14:53.260 "name": "BaseBdev4", 00:14:53.260 "aliases": [ 00:14:53.260 "20338318-22b2-4219-8607-bf71feba2592" 00:14:53.260 ], 00:14:53.260 "product_name": "Malloc disk", 00:14:53.260 "block_size": 512, 00:14:53.260 "num_blocks": 65536, 00:14:53.260 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:14:53.260 "assigned_rate_limits": { 00:14:53.260 "rw_ios_per_sec": 0, 00:14:53.260 "rw_mbytes_per_sec": 0, 00:14:53.260 "r_mbytes_per_sec": 0, 00:14:53.260 "w_mbytes_per_sec": 0 00:14:53.260 }, 00:14:53.260 "claimed": false, 00:14:53.260 "zoned": false, 00:14:53.260 "supported_io_types": { 00:14:53.260 "read": true, 00:14:53.260 "write": true, 00:14:53.260 "unmap": true, 00:14:53.260 "flush": true, 00:14:53.261 "reset": true, 00:14:53.261 "nvme_admin": false, 00:14:53.261 "nvme_io": false, 00:14:53.261 "nvme_io_md": false, 00:14:53.261 "write_zeroes": true, 00:14:53.261 "zcopy": true, 00:14:53.261 "get_zone_info": false, 00:14:53.261 "zone_management": false, 00:14:53.261 "zone_append": false, 00:14:53.261 "compare": false, 00:14:53.261 "compare_and_write": false, 00:14:53.261 "abort": true, 00:14:53.261 "seek_hole": false, 00:14:53.261 "seek_data": false, 00:14:53.261 "copy": true, 00:14:53.261 "nvme_iov_md": false 00:14:53.261 }, 00:14:53.261 "memory_domains": [ 00:14:53.261 { 00:14:53.261 "dma_device_id": "system", 00:14:53.261 "dma_device_type": 1 00:14:53.261 }, 00:14:53.261 { 00:14:53.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.261 "dma_device_type": 2 00:14:53.261 } 00:14:53.261 ], 00:14:53.261 "driver_specific": {} 00:14:53.261 } 00:14:53.261 ] 00:14:53.261 10:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:53.261 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:53.261 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:53.261 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:53.519 [2024-07-15 10:22:18.193261] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.519 [2024-07-15 10:22:18.193296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.519 [2024-07-15 10:22:18.193309] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.519 [2024-07-15 10:22:18.194296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:53.519 [2024-07-15 
10:22:18.194325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.519 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.783 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.783 "name": "Existed_Raid", 00:14:53.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.783 "strip_size_kb": 64, 00:14:53.783 "state": "configuring", 00:14:53.783 "raid_level": "raid0", 00:14:53.783 "superblock": false, 00:14:53.783 "num_base_bdevs": 4, 00:14:53.783 "num_base_bdevs_discovered": 3, 00:14:53.783 "num_base_bdevs_operational": 4, 00:14:53.783 "base_bdevs_list": [ 00:14:53.783 { 00:14:53.783 "name": "BaseBdev1", 00:14:53.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.783 "is_configured": false, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 0 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": "BaseBdev2", 00:14:53.783 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": "BaseBdev3", 00:14:53.783 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 }, 00:14:53.783 { 00:14:53.783 "name": "BaseBdev4", 00:14:53.783 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:14:53.783 "is_configured": true, 00:14:53.783 "data_offset": 0, 00:14:53.783 "data_size": 65536 00:14:53.783 } 00:14:53.783 ] 00:14:53.783 }' 00:14:53.783 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.783 10:22:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.347 10:22:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:54.347 [2024-07-15 10:22:19.023383] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:54.347 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:54.347 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.348 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.605 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.605 "name": "Existed_Raid", 00:14:54.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.605 "strip_size_kb": 64, 00:14:54.605 "state": "configuring", 00:14:54.605 "raid_level": "raid0", 00:14:54.605 "superblock": false, 00:14:54.605 "num_base_bdevs": 4, 00:14:54.605 "num_base_bdevs_discovered": 2, 00:14:54.605 "num_base_bdevs_operational": 4, 00:14:54.605 "base_bdevs_list": [ 00:14:54.605 { 00:14:54.605 "name": "BaseBdev1", 00:14:54.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.605 "is_configured": false, 00:14:54.605 "data_offset": 0, 00:14:54.605 "data_size": 0 00:14:54.605 }, 00:14:54.605 { 00:14:54.605 "name": null, 00:14:54.605 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:14:54.605 "is_configured": false, 00:14:54.605 "data_offset": 0, 00:14:54.605 "data_size": 65536 00:14:54.605 }, 00:14:54.605 { 00:14:54.605 "name": "BaseBdev3", 00:14:54.605 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:14:54.605 "is_configured": true, 00:14:54.605 "data_offset": 0, 00:14:54.605 "data_size": 65536 00:14:54.605 }, 00:14:54.605 { 00:14:54.605 "name": "BaseBdev4", 00:14:54.605 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:14:54.605 "is_configured": true, 00:14:54.605 "data_offset": 0, 00:14:54.605 "data_size": 65536 00:14:54.605 } 00:14:54.605 ] 00:14:54.605 }' 00:14:54.605 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.605 10:22:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.170 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.170 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:55.170 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:14:55.170 10:22:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.427 [2024-07-15 10:22:20.052926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.427 BaseBdev1 00:14:55.427 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:55.427 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:55.427 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:55.427 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:55.427 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:55.427 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:55.427 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:55.683 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.683 [ 00:14:55.683 { 00:14:55.683 "name": "BaseBdev1", 00:14:55.683 "aliases": [ 00:14:55.683 "534b37e4-2a4f-494f-82e4-494a8ed8fbe7" 00:14:55.683 ], 00:14:55.683 "product_name": "Malloc disk", 00:14:55.683 "block_size": 512, 00:14:55.683 "num_blocks": 65536, 00:14:55.683 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:14:55.683 "assigned_rate_limits": { 00:14:55.683 "rw_ios_per_sec": 0, 00:14:55.683 "rw_mbytes_per_sec": 0, 00:14:55.683 "r_mbytes_per_sec": 0, 00:14:55.683 "w_mbytes_per_sec": 0 00:14:55.683 }, 00:14:55.683 "claimed": true, 00:14:55.683 "claim_type": "exclusive_write", 00:14:55.683 "zoned": false, 00:14:55.683 "supported_io_types": { 00:14:55.683 "read": true, 00:14:55.683 "write": true, 00:14:55.684 "unmap": true, 00:14:55.684 "flush": true, 00:14:55.684 "reset": true, 00:14:55.684 "nvme_admin": false, 00:14:55.684 "nvme_io": false, 00:14:55.684 "nvme_io_md": false, 00:14:55.684 "write_zeroes": true, 00:14:55.684 "zcopy": true, 00:14:55.684 "get_zone_info": false, 00:14:55.684 "zone_management": false, 00:14:55.684 "zone_append": false, 00:14:55.684 "compare": false, 00:14:55.684 "compare_and_write": false, 00:14:55.684 "abort": true, 00:14:55.684 "seek_hole": false, 00:14:55.684 "seek_data": false, 00:14:55.684 "copy": true, 00:14:55.684 "nvme_iov_md": false 00:14:55.684 }, 00:14:55.684 "memory_domains": [ 00:14:55.684 { 00:14:55.684 "dma_device_id": "system", 00:14:55.684 "dma_device_type": 1 00:14:55.684 }, 00:14:55.684 { 00:14:55.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.684 "dma_device_type": 2 00:14:55.684 } 00:14:55.684 ], 00:14:55.684 "driver_specific": {} 00:14:55.684 } 00:14:55.684 ] 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.684 10:22:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.684 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.941 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.941 "name": "Existed_Raid", 00:14:55.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.941 "strip_size_kb": 64, 00:14:55.941 "state": "configuring", 00:14:55.941 "raid_level": "raid0", 00:14:55.941 "superblock": false, 00:14:55.941 "num_base_bdevs": 4, 00:14:55.941 "num_base_bdevs_discovered": 3, 00:14:55.941 "num_base_bdevs_operational": 4, 00:14:55.941 "base_bdevs_list": [ 00:14:55.941 { 00:14:55.941 "name": "BaseBdev1", 00:14:55.941 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:14:55.941 "is_configured": true, 00:14:55.941 "data_offset": 0, 00:14:55.941 "data_size": 65536 00:14:55.941 }, 00:14:55.941 { 00:14:55.941 "name": null, 00:14:55.941 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:14:55.941 "is_configured": false, 00:14:55.941 "data_offset": 0, 00:14:55.941 "data_size": 65536 00:14:55.941 }, 00:14:55.941 { 00:14:55.941 "name": "BaseBdev3", 00:14:55.941 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:14:55.941 "is_configured": true, 00:14:55.941 "data_offset": 0, 00:14:55.941 "data_size": 65536 00:14:55.941 }, 00:14:55.941 { 00:14:55.941 "name": "BaseBdev4", 00:14:55.941 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:14:55.941 "is_configured": true, 00:14:55.941 "data_offset": 0, 00:14:55.941 "data_size": 65536 00:14:55.941 } 00:14:55.941 ] 00:14:55.941 }' 00:14:55.941 10:22:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.941 10:22:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.504 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.504 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.504 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:56.504 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:56.760 [2024-07-15 10:22:21.376401] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.760 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.017 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.017 "name": "Existed_Raid", 00:14:57.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.017 "strip_size_kb": 64, 00:14:57.017 "state": "configuring", 00:14:57.017 "raid_level": "raid0", 00:14:57.017 "superblock": false, 00:14:57.017 "num_base_bdevs": 4, 00:14:57.017 "num_base_bdevs_discovered": 2, 00:14:57.017 "num_base_bdevs_operational": 4, 00:14:57.017 "base_bdevs_list": [ 00:14:57.017 { 00:14:57.017 "name": "BaseBdev1", 00:14:57.017 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:14:57.017 "is_configured": true, 00:14:57.017 "data_offset": 0, 00:14:57.017 "data_size": 65536 00:14:57.017 }, 00:14:57.017 { 00:14:57.017 "name": null, 00:14:57.017 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:14:57.017 "is_configured": false, 00:14:57.017 "data_offset": 0, 00:14:57.017 "data_size": 65536 00:14:57.017 }, 00:14:57.017 { 00:14:57.017 "name": null, 00:14:57.017 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:14:57.017 "is_configured": false, 00:14:57.017 "data_offset": 0, 00:14:57.017 "data_size": 65536 00:14:57.017 }, 00:14:57.017 { 00:14:57.017 "name": "BaseBdev4", 00:14:57.017 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:14:57.017 "is_configured": true, 00:14:57.017 "data_offset": 0, 00:14:57.017 "data_size": 65536 00:14:57.017 } 00:14:57.017 ] 00:14:57.017 }' 00:14:57.017 10:22:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.017 10:22:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.582 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.582 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:57.582 10:22:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:57.582 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:57.840 [2024-07-15 10:22:22.399042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.840 "name": "Existed_Raid", 00:14:57.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.840 "strip_size_kb": 64, 00:14:57.840 "state": "configuring", 00:14:57.840 "raid_level": "raid0", 00:14:57.840 "superblock": false, 00:14:57.840 "num_base_bdevs": 4, 00:14:57.840 "num_base_bdevs_discovered": 3, 00:14:57.840 "num_base_bdevs_operational": 4, 00:14:57.840 "base_bdevs_list": [ 00:14:57.840 { 00:14:57.840 "name": "BaseBdev1", 00:14:57.840 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:14:57.840 "is_configured": true, 00:14:57.840 "data_offset": 0, 00:14:57.840 "data_size": 65536 00:14:57.840 }, 00:14:57.840 { 00:14:57.840 "name": null, 00:14:57.840 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:14:57.840 "is_configured": false, 00:14:57.840 "data_offset": 0, 00:14:57.840 "data_size": 65536 00:14:57.840 }, 00:14:57.840 { 00:14:57.840 "name": "BaseBdev3", 00:14:57.840 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:14:57.840 "is_configured": true, 00:14:57.840 "data_offset": 0, 00:14:57.840 "data_size": 65536 00:14:57.840 }, 00:14:57.840 { 00:14:57.840 "name": "BaseBdev4", 00:14:57.840 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:14:57.840 "is_configured": true, 00:14:57.840 "data_offset": 0, 00:14:57.840 "data_size": 65536 00:14:57.840 } 00:14:57.840 ] 00:14:57.840 }' 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.840 10:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:58.406 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:58.406 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:58.664 [2024-07-15 10:22:23.373560] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.664 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.923 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.923 "name": "Existed_Raid", 00:14:58.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.923 "strip_size_kb": 64, 00:14:58.923 "state": "configuring", 00:14:58.923 "raid_level": "raid0", 00:14:58.923 "superblock": false, 00:14:58.923 "num_base_bdevs": 4, 00:14:58.923 "num_base_bdevs_discovered": 2, 00:14:58.923 "num_base_bdevs_operational": 4, 00:14:58.923 "base_bdevs_list": [ 00:14:58.923 { 00:14:58.923 "name": null, 00:14:58.923 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:14:58.923 "is_configured": false, 00:14:58.923 "data_offset": 0, 00:14:58.923 "data_size": 65536 00:14:58.923 }, 00:14:58.923 { 00:14:58.923 "name": null, 00:14:58.923 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:14:58.923 "is_configured": false, 00:14:58.923 "data_offset": 0, 00:14:58.923 "data_size": 65536 00:14:58.923 }, 00:14:58.923 { 00:14:58.923 "name": "BaseBdev3", 00:14:58.923 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:14:58.923 "is_configured": true, 00:14:58.923 "data_offset": 0, 00:14:58.923 "data_size": 65536 00:14:58.923 }, 00:14:58.923 { 00:14:58.923 "name": "BaseBdev4", 00:14:58.923 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:14:58.923 
"is_configured": true, 00:14:58.923 "data_offset": 0, 00:14:58.923 "data_size": 65536 00:14:58.923 } 00:14:58.923 ] 00:14:58.923 }' 00:14:58.923 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.923 10:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.486 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.486 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:59.486 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:59.486 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:59.743 [2024-07-15 10:22:24.365971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.743 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.001 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.001 "name": "Existed_Raid", 00:15:00.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.001 "strip_size_kb": 64, 00:15:00.001 "state": "configuring", 00:15:00.001 "raid_level": "raid0", 00:15:00.001 "superblock": false, 00:15:00.001 "num_base_bdevs": 4, 00:15:00.001 "num_base_bdevs_discovered": 3, 00:15:00.001 "num_base_bdevs_operational": 4, 00:15:00.001 "base_bdevs_list": [ 00:15:00.001 { 00:15:00.001 "name": null, 00:15:00.001 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:15:00.001 "is_configured": false, 00:15:00.001 "data_offset": 0, 00:15:00.001 "data_size": 65536 00:15:00.001 }, 00:15:00.001 { 00:15:00.001 "name": "BaseBdev2", 00:15:00.001 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:15:00.001 "is_configured": true, 00:15:00.001 "data_offset": 0, 00:15:00.001 "data_size": 65536 
00:15:00.001 }, 00:15:00.001 { 00:15:00.001 "name": "BaseBdev3", 00:15:00.001 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:15:00.002 "is_configured": true, 00:15:00.002 "data_offset": 0, 00:15:00.002 "data_size": 65536 00:15:00.002 }, 00:15:00.002 { 00:15:00.002 "name": "BaseBdev4", 00:15:00.002 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:15:00.002 "is_configured": true, 00:15:00.002 "data_offset": 0, 00:15:00.002 "data_size": 65536 00:15:00.002 } 00:15:00.002 ] 00:15:00.002 }' 00:15:00.002 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.002 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.260 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.260 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.518 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:00.518 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.518 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:00.776 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 534b37e4-2a4f-494f-82e4-494a8ed8fbe7 00:15:00.776 [2024-07-15 10:22:25.539661] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:00.776 [2024-07-15 10:22:25.539689] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a9b6f0 00:15:00.776 [2024-07-15 10:22:25.539694] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:00.776 [2024-07-15 10:22:25.539830] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1aa73d0 00:15:00.776 [2024-07-15 10:22:25.539916] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a9b6f0 00:15:00.777 [2024-07-15 10:22:25.539933] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a9b6f0 00:15:00.777 [2024-07-15 10:22:25.540063] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.777 NewBaseBdev 00:15:00.777 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:00.777 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:00.777 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.777 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:00.777 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.777 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.777 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.035 10:22:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:01.293 [ 00:15:01.293 { 00:15:01.293 "name": "NewBaseBdev", 00:15:01.293 "aliases": [ 00:15:01.293 "534b37e4-2a4f-494f-82e4-494a8ed8fbe7" 00:15:01.293 ], 00:15:01.293 "product_name": "Malloc disk", 00:15:01.293 "block_size": 512, 00:15:01.293 "num_blocks": 65536, 00:15:01.293 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:15:01.293 "assigned_rate_limits": { 00:15:01.293 "rw_ios_per_sec": 0, 00:15:01.293 "rw_mbytes_per_sec": 0, 00:15:01.293 "r_mbytes_per_sec": 0, 00:15:01.293 "w_mbytes_per_sec": 0 00:15:01.293 }, 00:15:01.293 "claimed": true, 00:15:01.293 "claim_type": "exclusive_write", 00:15:01.293 "zoned": false, 00:15:01.293 "supported_io_types": { 00:15:01.293 "read": true, 00:15:01.293 "write": true, 00:15:01.293 "unmap": true, 00:15:01.293 "flush": true, 00:15:01.293 "reset": true, 00:15:01.293 "nvme_admin": false, 00:15:01.293 "nvme_io": false, 00:15:01.293 "nvme_io_md": false, 00:15:01.293 "write_zeroes": true, 00:15:01.293 "zcopy": true, 00:15:01.293 "get_zone_info": false, 00:15:01.293 "zone_management": false, 00:15:01.293 "zone_append": false, 00:15:01.293 "compare": false, 00:15:01.293 "compare_and_write": false, 00:15:01.293 "abort": true, 00:15:01.293 "seek_hole": false, 00:15:01.293 "seek_data": false, 00:15:01.293 "copy": true, 00:15:01.293 "nvme_iov_md": false 00:15:01.293 }, 00:15:01.293 "memory_domains": [ 00:15:01.293 { 00:15:01.293 "dma_device_id": "system", 00:15:01.293 "dma_device_type": 1 00:15:01.293 }, 00:15:01.293 { 00:15:01.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.293 "dma_device_type": 2 00:15:01.293 } 00:15:01.293 ], 00:15:01.293 "driver_specific": {} 00:15:01.293 } 00:15:01.293 ] 00:15:01.293 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:01.293 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.294 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.294 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:15:01.294 "name": "Existed_Raid", 00:15:01.294 "uuid": "a271fba7-a33b-4e95-afbe-6c461dadf33d", 00:15:01.294 "strip_size_kb": 64, 00:15:01.294 "state": "online", 00:15:01.294 "raid_level": "raid0", 00:15:01.294 "superblock": false, 00:15:01.294 "num_base_bdevs": 4, 00:15:01.294 "num_base_bdevs_discovered": 4, 00:15:01.294 "num_base_bdevs_operational": 4, 00:15:01.294 "base_bdevs_list": [ 00:15:01.294 { 00:15:01.294 "name": "NewBaseBdev", 00:15:01.294 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:15:01.294 "is_configured": true, 00:15:01.294 "data_offset": 0, 00:15:01.294 "data_size": 65536 00:15:01.294 }, 00:15:01.294 { 00:15:01.294 "name": "BaseBdev2", 00:15:01.294 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:15:01.294 "is_configured": true, 00:15:01.294 "data_offset": 0, 00:15:01.294 "data_size": 65536 00:15:01.294 }, 00:15:01.294 { 00:15:01.294 "name": "BaseBdev3", 00:15:01.294 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:15:01.294 "is_configured": true, 00:15:01.294 "data_offset": 0, 00:15:01.294 "data_size": 65536 00:15:01.294 }, 00:15:01.294 { 00:15:01.294 "name": "BaseBdev4", 00:15:01.294 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:15:01.294 "is_configured": true, 00:15:01.294 "data_offset": 0, 00:15:01.294 "data_size": 65536 00:15:01.294 } 00:15:01.294 ] 00:15:01.294 }' 00:15:01.294 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.294 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.861 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:01.861 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:01.861 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:01.861 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:01.861 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:01.861 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:01.861 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:01.861 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:02.119 [2024-07-15 10:22:26.658739] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.119 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:02.119 "name": "Existed_Raid", 00:15:02.119 "aliases": [ 00:15:02.119 "a271fba7-a33b-4e95-afbe-6c461dadf33d" 00:15:02.119 ], 00:15:02.119 "product_name": "Raid Volume", 00:15:02.119 "block_size": 512, 00:15:02.119 "num_blocks": 262144, 00:15:02.119 "uuid": "a271fba7-a33b-4e95-afbe-6c461dadf33d", 00:15:02.119 "assigned_rate_limits": { 00:15:02.119 "rw_ios_per_sec": 0, 00:15:02.119 "rw_mbytes_per_sec": 0, 00:15:02.119 "r_mbytes_per_sec": 0, 00:15:02.119 "w_mbytes_per_sec": 0 00:15:02.119 }, 00:15:02.119 "claimed": false, 00:15:02.119 "zoned": false, 00:15:02.119 "supported_io_types": { 00:15:02.119 "read": true, 00:15:02.119 "write": true, 00:15:02.119 "unmap": true, 00:15:02.119 "flush": true, 00:15:02.119 "reset": true, 00:15:02.119 "nvme_admin": false, 00:15:02.119 "nvme_io": false, 00:15:02.119 
"nvme_io_md": false, 00:15:02.119 "write_zeroes": true, 00:15:02.119 "zcopy": false, 00:15:02.119 "get_zone_info": false, 00:15:02.120 "zone_management": false, 00:15:02.120 "zone_append": false, 00:15:02.120 "compare": false, 00:15:02.120 "compare_and_write": false, 00:15:02.120 "abort": false, 00:15:02.120 "seek_hole": false, 00:15:02.120 "seek_data": false, 00:15:02.120 "copy": false, 00:15:02.120 "nvme_iov_md": false 00:15:02.120 }, 00:15:02.120 "memory_domains": [ 00:15:02.120 { 00:15:02.120 "dma_device_id": "system", 00:15:02.120 "dma_device_type": 1 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.120 "dma_device_type": 2 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "dma_device_id": "system", 00:15:02.120 "dma_device_type": 1 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.120 "dma_device_type": 2 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "dma_device_id": "system", 00:15:02.120 "dma_device_type": 1 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.120 "dma_device_type": 2 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "dma_device_id": "system", 00:15:02.120 "dma_device_type": 1 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.120 "dma_device_type": 2 00:15:02.120 } 00:15:02.120 ], 00:15:02.120 "driver_specific": { 00:15:02.120 "raid": { 00:15:02.120 "uuid": "a271fba7-a33b-4e95-afbe-6c461dadf33d", 00:15:02.120 "strip_size_kb": 64, 00:15:02.120 "state": "online", 00:15:02.120 "raid_level": "raid0", 00:15:02.120 "superblock": false, 00:15:02.120 "num_base_bdevs": 4, 00:15:02.120 "num_base_bdevs_discovered": 4, 00:15:02.120 "num_base_bdevs_operational": 4, 00:15:02.120 "base_bdevs_list": [ 00:15:02.120 { 00:15:02.120 "name": "NewBaseBdev", 00:15:02.120 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:15:02.120 "is_configured": true, 00:15:02.120 "data_offset": 0, 00:15:02.120 "data_size": 65536 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "name": "BaseBdev2", 00:15:02.120 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:15:02.120 "is_configured": true, 00:15:02.120 "data_offset": 0, 00:15:02.120 "data_size": 65536 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "name": "BaseBdev3", 00:15:02.120 "uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:15:02.120 "is_configured": true, 00:15:02.120 "data_offset": 0, 00:15:02.120 "data_size": 65536 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "name": "BaseBdev4", 00:15:02.120 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:15:02.120 "is_configured": true, 00:15:02.120 "data_offset": 0, 00:15:02.120 "data_size": 65536 00:15:02.120 } 00:15:02.120 ] 00:15:02.120 } 00:15:02.120 } 00:15:02.120 }' 00:15:02.120 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.120 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:02.120 BaseBdev2 00:15:02.120 BaseBdev3 00:15:02.120 BaseBdev4' 00:15:02.120 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.120 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:02.120 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.120 
10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.120 "name": "NewBaseBdev", 00:15:02.120 "aliases": [ 00:15:02.120 "534b37e4-2a4f-494f-82e4-494a8ed8fbe7" 00:15:02.120 ], 00:15:02.120 "product_name": "Malloc disk", 00:15:02.120 "block_size": 512, 00:15:02.120 "num_blocks": 65536, 00:15:02.120 "uuid": "534b37e4-2a4f-494f-82e4-494a8ed8fbe7", 00:15:02.120 "assigned_rate_limits": { 00:15:02.120 "rw_ios_per_sec": 0, 00:15:02.120 "rw_mbytes_per_sec": 0, 00:15:02.120 "r_mbytes_per_sec": 0, 00:15:02.120 "w_mbytes_per_sec": 0 00:15:02.120 }, 00:15:02.120 "claimed": true, 00:15:02.120 "claim_type": "exclusive_write", 00:15:02.120 "zoned": false, 00:15:02.120 "supported_io_types": { 00:15:02.120 "read": true, 00:15:02.120 "write": true, 00:15:02.120 "unmap": true, 00:15:02.120 "flush": true, 00:15:02.120 "reset": true, 00:15:02.120 "nvme_admin": false, 00:15:02.120 "nvme_io": false, 00:15:02.120 "nvme_io_md": false, 00:15:02.120 "write_zeroes": true, 00:15:02.120 "zcopy": true, 00:15:02.120 "get_zone_info": false, 00:15:02.120 "zone_management": false, 00:15:02.120 "zone_append": false, 00:15:02.120 "compare": false, 00:15:02.120 "compare_and_write": false, 00:15:02.120 "abort": true, 00:15:02.120 "seek_hole": false, 00:15:02.120 "seek_data": false, 00:15:02.120 "copy": true, 00:15:02.120 "nvme_iov_md": false 00:15:02.120 }, 00:15:02.120 "memory_domains": [ 00:15:02.120 { 00:15:02.120 "dma_device_id": "system", 00:15:02.120 "dma_device_type": 1 00:15:02.120 }, 00:15:02.120 { 00:15:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.120 "dma_device_type": 2 00:15:02.120 } 00:15:02.120 ], 00:15:02.120 "driver_specific": {} 00:15:02.120 }' 00:15:02.120 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.379 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.379 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.379 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.379 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.379 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.379 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.379 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.379 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.379 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.379 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.638 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.638 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.638 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.638 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:02.638 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.638 "name": "BaseBdev2", 00:15:02.638 "aliases": [ 00:15:02.638 
"4af2f039-c5ed-4d21-8e97-e21ee3b30c35" 00:15:02.638 ], 00:15:02.638 "product_name": "Malloc disk", 00:15:02.638 "block_size": 512, 00:15:02.638 "num_blocks": 65536, 00:15:02.638 "uuid": "4af2f039-c5ed-4d21-8e97-e21ee3b30c35", 00:15:02.638 "assigned_rate_limits": { 00:15:02.638 "rw_ios_per_sec": 0, 00:15:02.638 "rw_mbytes_per_sec": 0, 00:15:02.638 "r_mbytes_per_sec": 0, 00:15:02.638 "w_mbytes_per_sec": 0 00:15:02.638 }, 00:15:02.638 "claimed": true, 00:15:02.638 "claim_type": "exclusive_write", 00:15:02.638 "zoned": false, 00:15:02.638 "supported_io_types": { 00:15:02.638 "read": true, 00:15:02.638 "write": true, 00:15:02.638 "unmap": true, 00:15:02.638 "flush": true, 00:15:02.638 "reset": true, 00:15:02.638 "nvme_admin": false, 00:15:02.638 "nvme_io": false, 00:15:02.638 "nvme_io_md": false, 00:15:02.638 "write_zeroes": true, 00:15:02.638 "zcopy": true, 00:15:02.638 "get_zone_info": false, 00:15:02.638 "zone_management": false, 00:15:02.638 "zone_append": false, 00:15:02.638 "compare": false, 00:15:02.638 "compare_and_write": false, 00:15:02.638 "abort": true, 00:15:02.638 "seek_hole": false, 00:15:02.638 "seek_data": false, 00:15:02.638 "copy": true, 00:15:02.638 "nvme_iov_md": false 00:15:02.638 }, 00:15:02.638 "memory_domains": [ 00:15:02.638 { 00:15:02.638 "dma_device_id": "system", 00:15:02.638 "dma_device_type": 1 00:15:02.638 }, 00:15:02.638 { 00:15:02.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.638 "dma_device_type": 2 00:15:02.638 } 00:15:02.638 ], 00:15:02.638 "driver_specific": {} 00:15:02.638 }' 00:15:02.638 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.638 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:02.896 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.161 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.161 "name": "BaseBdev3", 00:15:03.161 "aliases": [ 00:15:03.161 "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c" 00:15:03.161 ], 00:15:03.161 "product_name": "Malloc disk", 00:15:03.161 "block_size": 512, 00:15:03.161 "num_blocks": 65536, 00:15:03.161 
"uuid": "be3c50d4-897a-4b9e-97ce-da37f4a8fe1c", 00:15:03.161 "assigned_rate_limits": { 00:15:03.161 "rw_ios_per_sec": 0, 00:15:03.161 "rw_mbytes_per_sec": 0, 00:15:03.161 "r_mbytes_per_sec": 0, 00:15:03.161 "w_mbytes_per_sec": 0 00:15:03.161 }, 00:15:03.161 "claimed": true, 00:15:03.161 "claim_type": "exclusive_write", 00:15:03.161 "zoned": false, 00:15:03.161 "supported_io_types": { 00:15:03.161 "read": true, 00:15:03.161 "write": true, 00:15:03.161 "unmap": true, 00:15:03.161 "flush": true, 00:15:03.161 "reset": true, 00:15:03.161 "nvme_admin": false, 00:15:03.161 "nvme_io": false, 00:15:03.161 "nvme_io_md": false, 00:15:03.161 "write_zeroes": true, 00:15:03.161 "zcopy": true, 00:15:03.161 "get_zone_info": false, 00:15:03.161 "zone_management": false, 00:15:03.161 "zone_append": false, 00:15:03.161 "compare": false, 00:15:03.161 "compare_and_write": false, 00:15:03.161 "abort": true, 00:15:03.161 "seek_hole": false, 00:15:03.161 "seek_data": false, 00:15:03.161 "copy": true, 00:15:03.161 "nvme_iov_md": false 00:15:03.161 }, 00:15:03.161 "memory_domains": [ 00:15:03.161 { 00:15:03.161 "dma_device_id": "system", 00:15:03.161 "dma_device_type": 1 00:15:03.161 }, 00:15:03.161 { 00:15:03.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.161 "dma_device_type": 2 00:15:03.161 } 00:15:03.161 ], 00:15:03.161 "driver_specific": {} 00:15:03.161 }' 00:15:03.161 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.161 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.161 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.161 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.161 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.478 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.478 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.478 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.478 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.478 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.478 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.478 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.478 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.478 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:03.478 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.737 "name": "BaseBdev4", 00:15:03.737 "aliases": [ 00:15:03.737 "20338318-22b2-4219-8607-bf71feba2592" 00:15:03.737 ], 00:15:03.737 "product_name": "Malloc disk", 00:15:03.737 "block_size": 512, 00:15:03.737 "num_blocks": 65536, 00:15:03.737 "uuid": "20338318-22b2-4219-8607-bf71feba2592", 00:15:03.737 "assigned_rate_limits": { 00:15:03.737 "rw_ios_per_sec": 0, 00:15:03.737 "rw_mbytes_per_sec": 0, 00:15:03.737 
"r_mbytes_per_sec": 0, 00:15:03.737 "w_mbytes_per_sec": 0 00:15:03.737 }, 00:15:03.737 "claimed": true, 00:15:03.737 "claim_type": "exclusive_write", 00:15:03.737 "zoned": false, 00:15:03.737 "supported_io_types": { 00:15:03.737 "read": true, 00:15:03.737 "write": true, 00:15:03.737 "unmap": true, 00:15:03.737 "flush": true, 00:15:03.737 "reset": true, 00:15:03.737 "nvme_admin": false, 00:15:03.737 "nvme_io": false, 00:15:03.737 "nvme_io_md": false, 00:15:03.737 "write_zeroes": true, 00:15:03.737 "zcopy": true, 00:15:03.737 "get_zone_info": false, 00:15:03.737 "zone_management": false, 00:15:03.737 "zone_append": false, 00:15:03.737 "compare": false, 00:15:03.737 "compare_and_write": false, 00:15:03.737 "abort": true, 00:15:03.737 "seek_hole": false, 00:15:03.737 "seek_data": false, 00:15:03.737 "copy": true, 00:15:03.737 "nvme_iov_md": false 00:15:03.737 }, 00:15:03.737 "memory_domains": [ 00:15:03.737 { 00:15:03.737 "dma_device_id": "system", 00:15:03.737 "dma_device_type": 1 00:15:03.737 }, 00:15:03.737 { 00:15:03.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.737 "dma_device_type": 2 00:15:03.737 } 00:15:03.737 ], 00:15:03.737 "driver_specific": {} 00:15:03.737 }' 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.737 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.996 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.996 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.996 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.996 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.996 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:03.996 [2024-07-15 10:22:28.759979] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.996 [2024-07-15 10:22:28.760000] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.996 [2024-07-15 10:22:28.760038] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.996 [2024-07-15 10:22:28.760078] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.996 [2024-07-15 10:22:28.760086] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a9b6f0 name Existed_Raid, state offline 00:15:03.996 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1803881 00:15:03.996 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1803881 ']' 00:15:03.996 10:22:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1803881 00:15:03.996 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:04.255 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.255 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1803881 00:15:04.255 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:04.255 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:04.255 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1803881' 00:15:04.255 killing process with pid 1803881 00:15:04.255 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1803881 00:15:04.255 [2024-07-15 10:22:28.833045] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.255 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1803881 00:15:04.255 [2024-07-15 10:22:28.864547] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.255 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:04.255 00:15:04.255 real 0m24.365s 00:15:04.255 user 0m44.420s 00:15:04.255 sys 0m4.801s 00:15:04.255 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.255 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.255 ************************************ 00:15:04.255 END TEST raid_state_function_test 00:15:04.255 ************************************ 00:15:04.515 10:22:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:04.515 10:22:29 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:15:04.515 10:22:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:04.515 10:22:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.515 10:22:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.515 ************************************ 00:15:04.515 START TEST raid_state_function_test_sb 00:15:04.515 ************************************ 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1808659 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1808659' 00:15:04.515 Process raid pid: 1808659 00:15:04.515 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1808659 /var/tmp/spdk-raid.sock 00:15:04.516 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1808659 ']' 00:15:04.516 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:04.516 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.516 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
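The raid_state_function_test_sb run that starts here exercises the same state machine as the non-superblock variant: a raid0 bdev named Existed_Raid is declared over four base bdevs with a 64 KB strip size (-z 64) and the -s superblock flag, and its state is polled over the /var/tmp/spdk-raid.sock RPC socket as the base bdevs appear. The shell sketch below is not part of the captured log; it is a minimal, hand-written reproduction of that flow, assuming the same bdev_svc instance is still listening on /var/tmp/spdk-raid.sock and that rpc.py lives at the workspace path used in this run. The RPC names and arguments (bdev_raid_create, bdev_malloc_create, bdev_raid_get_bdevs, bdev_raid_delete) are copied from commands recorded later in this log; only the loop and the final jq filter on .state are added for illustration.

#!/usr/bin/env bash
# Hand-written sketch, not taken from the test run: drive a 4-disk raid0 bdev
# with an on-disk superblock from "configuring" to "online" using the RPCs
# that appear in this log.
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py   # assumed workspace layout
sock=/var/tmp/spdk-raid.sock

# Declaring the raid before its base bdevs exist leaves it in the "configuring" state.
$rpc -s $sock bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Create the 32 MB / 512-byte-block malloc base bdevs; the configuring raid
# claims each one as it appears, and num_base_bdevs_discovered grows accordingly.
for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc -s $sock bdev_malloc_create 32 512 -b "$name"
done

# With all four base bdevs discovered, the raid transitions to "online".
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

# Tear down, as the test does between iterations.
$rpc -s $sock bdev_raid_delete Existed_Raid

The test proper is stricter than this sketch: after each base bdev is added it deletes and re-creates Existed_Raid so it can assert the intermediate num_base_bdevs_discovered counts (0 through 3) before the array finally reaches the online state, which is what the repeated bdev_raid_delete / bdev_raid_create pairs in the log below correspond to.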
00:15:04.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:04.516 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.516 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.516 [2024-07-15 10:22:29.160939] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:04.516 [2024-07-15 10:22:29.160989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:04.516 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:04.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:04.516 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:04.516 [2024-07-15 10:22:29.247887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.774 [2024-07-15 10:22:29.323674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.774 [2024-07-15 10:22:29.373502] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.774 [2024-07-15 10:22:29.373524] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.342 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:05.342 10:22:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:05.342 10:22:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:05.342 [2024-07-15 10:22:30.120586] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:05.342 [2024-07-15 10:22:30.120619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:05.342 [2024-07-15 10:22:30.120626] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:05.342 [2024-07-15 10:22:30.120634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:05.342 [2024-07-15 10:22:30.120640] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:05.342 [2024-07-15 10:22:30.120647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:05.342 [2024-07-15 10:22:30.120652] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:05.342 [2024-07-15 
10:22:30.120659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.601 "name": "Existed_Raid", 00:15:05.601 "uuid": "3271d48e-68c7-4f72-817e-a33259b4449f", 00:15:05.601 "strip_size_kb": 64, 00:15:05.601 "state": "configuring", 00:15:05.601 "raid_level": "raid0", 00:15:05.601 "superblock": true, 00:15:05.601 "num_base_bdevs": 4, 00:15:05.601 "num_base_bdevs_discovered": 0, 00:15:05.601 "num_base_bdevs_operational": 4, 00:15:05.601 "base_bdevs_list": [ 00:15:05.601 { 00:15:05.601 "name": "BaseBdev1", 00:15:05.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.601 "is_configured": false, 00:15:05.601 "data_offset": 0, 00:15:05.601 "data_size": 0 00:15:05.601 }, 00:15:05.601 { 00:15:05.601 "name": "BaseBdev2", 00:15:05.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.601 "is_configured": false, 00:15:05.601 "data_offset": 0, 00:15:05.601 "data_size": 0 00:15:05.601 }, 00:15:05.601 { 00:15:05.601 "name": "BaseBdev3", 00:15:05.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.601 "is_configured": false, 00:15:05.601 "data_offset": 0, 00:15:05.601 "data_size": 0 00:15:05.601 }, 00:15:05.601 { 00:15:05.601 "name": "BaseBdev4", 00:15:05.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.601 "is_configured": false, 00:15:05.601 "data_offset": 0, 00:15:05.601 "data_size": 0 00:15:05.601 } 00:15:05.601 ] 00:15:05.601 }' 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.601 10:22:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.167 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:06.167 [2024-07-15 10:22:30.922542] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:15:06.167 [2024-07-15 10:22:30.922562] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x221cf60 name Existed_Raid, state configuring 00:15:06.167 10:22:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:06.426 [2024-07-15 10:22:31.091005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.426 [2024-07-15 10:22:31.091025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.426 [2024-07-15 10:22:31.091031] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.426 [2024-07-15 10:22:31.091038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.426 [2024-07-15 10:22:31.091043] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.426 [2024-07-15 10:22:31.091050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.426 [2024-07-15 10:22:31.091056] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:06.426 [2024-07-15 10:22:31.091063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:06.426 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:06.685 [2024-07-15 10:22:31.251689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.685 BaseBdev1 00:15:06.685 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:06.685 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:06.685 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:06.685 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:06.685 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:06.685 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:06.685 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:06.685 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.943 [ 00:15:06.943 { 00:15:06.943 "name": "BaseBdev1", 00:15:06.943 "aliases": [ 00:15:06.943 "7e51744a-d53b-4de9-83a7-a6007bf1e999" 00:15:06.943 ], 00:15:06.943 "product_name": "Malloc disk", 00:15:06.943 "block_size": 512, 00:15:06.943 "num_blocks": 65536, 00:15:06.943 "uuid": "7e51744a-d53b-4de9-83a7-a6007bf1e999", 00:15:06.943 "assigned_rate_limits": { 00:15:06.943 "rw_ios_per_sec": 0, 00:15:06.943 "rw_mbytes_per_sec": 0, 00:15:06.944 "r_mbytes_per_sec": 0, 00:15:06.944 "w_mbytes_per_sec": 0 00:15:06.944 }, 00:15:06.944 "claimed": 
true, 00:15:06.944 "claim_type": "exclusive_write", 00:15:06.944 "zoned": false, 00:15:06.944 "supported_io_types": { 00:15:06.944 "read": true, 00:15:06.944 "write": true, 00:15:06.944 "unmap": true, 00:15:06.944 "flush": true, 00:15:06.944 "reset": true, 00:15:06.944 "nvme_admin": false, 00:15:06.944 "nvme_io": false, 00:15:06.944 "nvme_io_md": false, 00:15:06.944 "write_zeroes": true, 00:15:06.944 "zcopy": true, 00:15:06.944 "get_zone_info": false, 00:15:06.944 "zone_management": false, 00:15:06.944 "zone_append": false, 00:15:06.944 "compare": false, 00:15:06.944 "compare_and_write": false, 00:15:06.944 "abort": true, 00:15:06.944 "seek_hole": false, 00:15:06.944 "seek_data": false, 00:15:06.944 "copy": true, 00:15:06.944 "nvme_iov_md": false 00:15:06.944 }, 00:15:06.944 "memory_domains": [ 00:15:06.944 { 00:15:06.944 "dma_device_id": "system", 00:15:06.944 "dma_device_type": 1 00:15:06.944 }, 00:15:06.944 { 00:15:06.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.944 "dma_device_type": 2 00:15:06.944 } 00:15:06.944 ], 00:15:06.944 "driver_specific": {} 00:15:06.944 } 00:15:06.944 ] 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.944 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.203 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.203 "name": "Existed_Raid", 00:15:07.203 "uuid": "041da234-b653-4a7e-b845-550a1f38d748", 00:15:07.203 "strip_size_kb": 64, 00:15:07.203 "state": "configuring", 00:15:07.203 "raid_level": "raid0", 00:15:07.203 "superblock": true, 00:15:07.203 "num_base_bdevs": 4, 00:15:07.203 "num_base_bdevs_discovered": 1, 00:15:07.203 "num_base_bdevs_operational": 4, 00:15:07.203 "base_bdevs_list": [ 00:15:07.203 { 00:15:07.203 "name": "BaseBdev1", 00:15:07.203 "uuid": "7e51744a-d53b-4de9-83a7-a6007bf1e999", 00:15:07.203 "is_configured": true, 00:15:07.203 "data_offset": 2048, 00:15:07.203 "data_size": 63488 00:15:07.203 }, 00:15:07.203 { 00:15:07.203 "name": "BaseBdev2", 00:15:07.203 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:07.203 "is_configured": false, 00:15:07.203 "data_offset": 0, 00:15:07.203 "data_size": 0 00:15:07.203 }, 00:15:07.203 { 00:15:07.203 "name": "BaseBdev3", 00:15:07.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.203 "is_configured": false, 00:15:07.203 "data_offset": 0, 00:15:07.203 "data_size": 0 00:15:07.203 }, 00:15:07.203 { 00:15:07.203 "name": "BaseBdev4", 00:15:07.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.203 "is_configured": false, 00:15:07.203 "data_offset": 0, 00:15:07.203 "data_size": 0 00:15:07.203 } 00:15:07.203 ] 00:15:07.203 }' 00:15:07.203 10:22:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.203 10:22:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.770 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:07.770 [2024-07-15 10:22:32.422702] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.770 [2024-07-15 10:22:32.422731] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x221c7d0 name Existed_Raid, state configuring 00:15:07.770 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:08.029 [2024-07-15 10:22:32.591170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.029 [2024-07-15 10:22:32.592234] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.029 [2024-07-15 10:22:32.592260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.029 [2024-07-15 10:22:32.592267] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:08.029 [2024-07-15 10:22:32.592274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:08.029 [2024-07-15 10:22:32.592280] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:08.029 [2024-07-15 10:22:32.592287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.029 "name": "Existed_Raid", 00:15:08.029 "uuid": "e68a8404-34d3-4cd2-a614-d307b7145ae0", 00:15:08.029 "strip_size_kb": 64, 00:15:08.029 "state": "configuring", 00:15:08.029 "raid_level": "raid0", 00:15:08.029 "superblock": true, 00:15:08.029 "num_base_bdevs": 4, 00:15:08.029 "num_base_bdevs_discovered": 1, 00:15:08.029 "num_base_bdevs_operational": 4, 00:15:08.029 "base_bdevs_list": [ 00:15:08.029 { 00:15:08.029 "name": "BaseBdev1", 00:15:08.029 "uuid": "7e51744a-d53b-4de9-83a7-a6007bf1e999", 00:15:08.029 "is_configured": true, 00:15:08.029 "data_offset": 2048, 00:15:08.029 "data_size": 63488 00:15:08.029 }, 00:15:08.029 { 00:15:08.029 "name": "BaseBdev2", 00:15:08.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.029 "is_configured": false, 00:15:08.029 "data_offset": 0, 00:15:08.029 "data_size": 0 00:15:08.029 }, 00:15:08.029 { 00:15:08.029 "name": "BaseBdev3", 00:15:08.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.029 "is_configured": false, 00:15:08.029 "data_offset": 0, 00:15:08.029 "data_size": 0 00:15:08.029 }, 00:15:08.029 { 00:15:08.029 "name": "BaseBdev4", 00:15:08.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.029 "is_configured": false, 00:15:08.029 "data_offset": 0, 00:15:08.029 "data_size": 0 00:15:08.029 } 00:15:08.029 ] 00:15:08.029 }' 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.029 10:22:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.595 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:08.853 [2024-07-15 10:22:33.411899] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.853 BaseBdev2 00:15:08.853 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:08.853 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:08.853 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:08.853 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:08.853 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:08.853 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:08.853 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:08.853 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.110 [ 00:15:09.110 { 00:15:09.110 "name": "BaseBdev2", 00:15:09.111 "aliases": [ 00:15:09.111 "d386861a-d409-4eea-92ed-cad92250edb4" 00:15:09.111 ], 00:15:09.111 "product_name": "Malloc disk", 00:15:09.111 "block_size": 512, 00:15:09.111 "num_blocks": 65536, 00:15:09.111 "uuid": "d386861a-d409-4eea-92ed-cad92250edb4", 00:15:09.111 "assigned_rate_limits": { 00:15:09.111 "rw_ios_per_sec": 0, 00:15:09.111 "rw_mbytes_per_sec": 0, 00:15:09.111 "r_mbytes_per_sec": 0, 00:15:09.111 "w_mbytes_per_sec": 0 00:15:09.111 }, 00:15:09.111 "claimed": true, 00:15:09.111 "claim_type": "exclusive_write", 00:15:09.111 "zoned": false, 00:15:09.111 "supported_io_types": { 00:15:09.111 "read": true, 00:15:09.111 "write": true, 00:15:09.111 "unmap": true, 00:15:09.111 "flush": true, 00:15:09.111 "reset": true, 00:15:09.111 "nvme_admin": false, 00:15:09.111 "nvme_io": false, 00:15:09.111 "nvme_io_md": false, 00:15:09.111 "write_zeroes": true, 00:15:09.111 "zcopy": true, 00:15:09.111 "get_zone_info": false, 00:15:09.111 "zone_management": false, 00:15:09.111 "zone_append": false, 00:15:09.111 "compare": false, 00:15:09.111 "compare_and_write": false, 00:15:09.111 "abort": true, 00:15:09.111 "seek_hole": false, 00:15:09.111 "seek_data": false, 00:15:09.111 "copy": true, 00:15:09.111 "nvme_iov_md": false 00:15:09.111 }, 00:15:09.111 "memory_domains": [ 00:15:09.111 { 00:15:09.111 "dma_device_id": "system", 00:15:09.111 "dma_device_type": 1 00:15:09.111 }, 00:15:09.111 { 00:15:09.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.111 "dma_device_type": 2 00:15:09.111 } 00:15:09.111 ], 00:15:09.111 "driver_specific": {} 00:15:09.111 } 00:15:09.111 ] 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.111 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.368 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.368 "name": "Existed_Raid", 00:15:09.368 "uuid": "e68a8404-34d3-4cd2-a614-d307b7145ae0", 00:15:09.368 "strip_size_kb": 64, 00:15:09.368 "state": "configuring", 00:15:09.368 "raid_level": "raid0", 00:15:09.368 "superblock": true, 00:15:09.368 "num_base_bdevs": 4, 00:15:09.368 "num_base_bdevs_discovered": 2, 00:15:09.368 "num_base_bdevs_operational": 4, 00:15:09.368 "base_bdevs_list": [ 00:15:09.368 { 00:15:09.368 "name": "BaseBdev1", 00:15:09.368 "uuid": "7e51744a-d53b-4de9-83a7-a6007bf1e999", 00:15:09.368 "is_configured": true, 00:15:09.368 "data_offset": 2048, 00:15:09.368 "data_size": 63488 00:15:09.368 }, 00:15:09.368 { 00:15:09.368 "name": "BaseBdev2", 00:15:09.368 "uuid": "d386861a-d409-4eea-92ed-cad92250edb4", 00:15:09.368 "is_configured": true, 00:15:09.368 "data_offset": 2048, 00:15:09.368 "data_size": 63488 00:15:09.368 }, 00:15:09.368 { 00:15:09.368 "name": "BaseBdev3", 00:15:09.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.368 "is_configured": false, 00:15:09.368 "data_offset": 0, 00:15:09.368 "data_size": 0 00:15:09.368 }, 00:15:09.368 { 00:15:09.368 "name": "BaseBdev4", 00:15:09.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.368 "is_configured": false, 00:15:09.368 "data_offset": 0, 00:15:09.368 "data_size": 0 00:15:09.368 } 00:15:09.368 ] 00:15:09.368 }' 00:15:09.368 10:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.368 10:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.934 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.934 [2024-07-15 10:22:34.593700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.934 BaseBdev3 00:15:09.934 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:09.934 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:09.934 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:09.934 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:09.934 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:09.934 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:09.934 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:10.193 [ 00:15:10.193 { 00:15:10.193 "name": "BaseBdev3", 00:15:10.193 "aliases": [ 00:15:10.193 "4a340c4e-5eec-492c-b3dc-7a5bd07c4517" 00:15:10.193 ], 00:15:10.193 
"product_name": "Malloc disk", 00:15:10.193 "block_size": 512, 00:15:10.193 "num_blocks": 65536, 00:15:10.193 "uuid": "4a340c4e-5eec-492c-b3dc-7a5bd07c4517", 00:15:10.193 "assigned_rate_limits": { 00:15:10.193 "rw_ios_per_sec": 0, 00:15:10.193 "rw_mbytes_per_sec": 0, 00:15:10.193 "r_mbytes_per_sec": 0, 00:15:10.193 "w_mbytes_per_sec": 0 00:15:10.193 }, 00:15:10.193 "claimed": true, 00:15:10.193 "claim_type": "exclusive_write", 00:15:10.193 "zoned": false, 00:15:10.193 "supported_io_types": { 00:15:10.193 "read": true, 00:15:10.193 "write": true, 00:15:10.193 "unmap": true, 00:15:10.193 "flush": true, 00:15:10.193 "reset": true, 00:15:10.193 "nvme_admin": false, 00:15:10.193 "nvme_io": false, 00:15:10.193 "nvme_io_md": false, 00:15:10.193 "write_zeroes": true, 00:15:10.193 "zcopy": true, 00:15:10.193 "get_zone_info": false, 00:15:10.193 "zone_management": false, 00:15:10.193 "zone_append": false, 00:15:10.193 "compare": false, 00:15:10.193 "compare_and_write": false, 00:15:10.193 "abort": true, 00:15:10.193 "seek_hole": false, 00:15:10.193 "seek_data": false, 00:15:10.193 "copy": true, 00:15:10.193 "nvme_iov_md": false 00:15:10.193 }, 00:15:10.193 "memory_domains": [ 00:15:10.193 { 00:15:10.193 "dma_device_id": "system", 00:15:10.193 "dma_device_type": 1 00:15:10.193 }, 00:15:10.193 { 00:15:10.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.193 "dma_device_type": 2 00:15:10.193 } 00:15:10.193 ], 00:15:10.193 "driver_specific": {} 00:15:10.193 } 00:15:10.193 ] 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.193 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.452 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.452 "name": "Existed_Raid", 00:15:10.452 "uuid": "e68a8404-34d3-4cd2-a614-d307b7145ae0", 
00:15:10.452 "strip_size_kb": 64, 00:15:10.452 "state": "configuring", 00:15:10.452 "raid_level": "raid0", 00:15:10.452 "superblock": true, 00:15:10.452 "num_base_bdevs": 4, 00:15:10.452 "num_base_bdevs_discovered": 3, 00:15:10.452 "num_base_bdevs_operational": 4, 00:15:10.452 "base_bdevs_list": [ 00:15:10.452 { 00:15:10.452 "name": "BaseBdev1", 00:15:10.452 "uuid": "7e51744a-d53b-4de9-83a7-a6007bf1e999", 00:15:10.452 "is_configured": true, 00:15:10.452 "data_offset": 2048, 00:15:10.452 "data_size": 63488 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "BaseBdev2", 00:15:10.452 "uuid": "d386861a-d409-4eea-92ed-cad92250edb4", 00:15:10.452 "is_configured": true, 00:15:10.452 "data_offset": 2048, 00:15:10.452 "data_size": 63488 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "BaseBdev3", 00:15:10.452 "uuid": "4a340c4e-5eec-492c-b3dc-7a5bd07c4517", 00:15:10.452 "is_configured": true, 00:15:10.452 "data_offset": 2048, 00:15:10.452 "data_size": 63488 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "BaseBdev4", 00:15:10.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.452 "is_configured": false, 00:15:10.452 "data_offset": 0, 00:15:10.452 "data_size": 0 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 }' 00:15:10.452 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.452 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.018 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:11.018 [2024-07-15 10:22:35.775561] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:11.018 [2024-07-15 10:22:35.775680] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x221d830 00:15:11.018 [2024-07-15 10:22:35.775691] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:11.018 [2024-07-15 10:22:35.775813] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22141e0 00:15:11.018 [2024-07-15 10:22:35.775896] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x221d830 00:15:11.018 [2024-07-15 10:22:35.775911] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x221d830 00:15:11.018 [2024-07-15 10:22:35.775977] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.018 BaseBdev4 00:15:11.018 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:11.018 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:11.018 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.019 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:11.019 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.019 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.019 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.277 10:22:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:11.537 [ 00:15:11.537 { 00:15:11.537 "name": "BaseBdev4", 00:15:11.537 "aliases": [ 00:15:11.537 "629b5627-3812-4769-a117-7b92817db01b" 00:15:11.537 ], 00:15:11.537 "product_name": "Malloc disk", 00:15:11.537 "block_size": 512, 00:15:11.537 "num_blocks": 65536, 00:15:11.537 "uuid": "629b5627-3812-4769-a117-7b92817db01b", 00:15:11.537 "assigned_rate_limits": { 00:15:11.537 "rw_ios_per_sec": 0, 00:15:11.537 "rw_mbytes_per_sec": 0, 00:15:11.537 "r_mbytes_per_sec": 0, 00:15:11.537 "w_mbytes_per_sec": 0 00:15:11.537 }, 00:15:11.537 "claimed": true, 00:15:11.537 "claim_type": "exclusive_write", 00:15:11.537 "zoned": false, 00:15:11.537 "supported_io_types": { 00:15:11.537 "read": true, 00:15:11.537 "write": true, 00:15:11.537 "unmap": true, 00:15:11.537 "flush": true, 00:15:11.537 "reset": true, 00:15:11.537 "nvme_admin": false, 00:15:11.537 "nvme_io": false, 00:15:11.537 "nvme_io_md": false, 00:15:11.537 "write_zeroes": true, 00:15:11.537 "zcopy": true, 00:15:11.537 "get_zone_info": false, 00:15:11.537 "zone_management": false, 00:15:11.537 "zone_append": false, 00:15:11.537 "compare": false, 00:15:11.537 "compare_and_write": false, 00:15:11.537 "abort": true, 00:15:11.537 "seek_hole": false, 00:15:11.537 "seek_data": false, 00:15:11.537 "copy": true, 00:15:11.537 "nvme_iov_md": false 00:15:11.537 }, 00:15:11.537 "memory_domains": [ 00:15:11.537 { 00:15:11.537 "dma_device_id": "system", 00:15:11.537 "dma_device_type": 1 00:15:11.537 }, 00:15:11.537 { 00:15:11.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.537 "dma_device_type": 2 00:15:11.537 } 00:15:11.537 ], 00:15:11.537 "driver_specific": {} 00:15:11.537 } 00:15:11.537 ] 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.537 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.537 10:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.795 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.795 "name": "Existed_Raid", 00:15:11.795 "uuid": "e68a8404-34d3-4cd2-a614-d307b7145ae0", 00:15:11.795 "strip_size_kb": 64, 00:15:11.795 "state": "online", 00:15:11.795 "raid_level": "raid0", 00:15:11.795 "superblock": true, 00:15:11.795 "num_base_bdevs": 4, 00:15:11.795 "num_base_bdevs_discovered": 4, 00:15:11.795 "num_base_bdevs_operational": 4, 00:15:11.795 "base_bdevs_list": [ 00:15:11.796 { 00:15:11.796 "name": "BaseBdev1", 00:15:11.796 "uuid": "7e51744a-d53b-4de9-83a7-a6007bf1e999", 00:15:11.796 "is_configured": true, 00:15:11.796 "data_offset": 2048, 00:15:11.796 "data_size": 63488 00:15:11.796 }, 00:15:11.796 { 00:15:11.796 "name": "BaseBdev2", 00:15:11.796 "uuid": "d386861a-d409-4eea-92ed-cad92250edb4", 00:15:11.796 "is_configured": true, 00:15:11.796 "data_offset": 2048, 00:15:11.796 "data_size": 63488 00:15:11.796 }, 00:15:11.796 { 00:15:11.796 "name": "BaseBdev3", 00:15:11.796 "uuid": "4a340c4e-5eec-492c-b3dc-7a5bd07c4517", 00:15:11.796 "is_configured": true, 00:15:11.796 "data_offset": 2048, 00:15:11.796 "data_size": 63488 00:15:11.796 }, 00:15:11.796 { 00:15:11.796 "name": "BaseBdev4", 00:15:11.796 "uuid": "629b5627-3812-4769-a117-7b92817db01b", 00:15:11.796 "is_configured": true, 00:15:11.796 "data_offset": 2048, 00:15:11.796 "data_size": 63488 00:15:11.796 } 00:15:11.796 ] 00:15:11.796 }' 00:15:11.796 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.796 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.054 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:12.054 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:12.054 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:12.054 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:12.054 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:12.054 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:12.054 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:12.054 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:12.312 [2024-07-15 10:22:36.966833] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.312 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:12.312 "name": "Existed_Raid", 00:15:12.312 "aliases": [ 00:15:12.312 "e68a8404-34d3-4cd2-a614-d307b7145ae0" 00:15:12.312 ], 00:15:12.312 "product_name": "Raid Volume", 00:15:12.312 "block_size": 512, 00:15:12.312 "num_blocks": 253952, 00:15:12.312 "uuid": "e68a8404-34d3-4cd2-a614-d307b7145ae0", 00:15:12.312 "assigned_rate_limits": { 00:15:12.312 "rw_ios_per_sec": 0, 00:15:12.312 "rw_mbytes_per_sec": 0, 00:15:12.312 "r_mbytes_per_sec": 0, 00:15:12.312 "w_mbytes_per_sec": 0 00:15:12.312 }, 00:15:12.312 "claimed": false, 00:15:12.312 "zoned": false, 
00:15:12.312 "supported_io_types": { 00:15:12.312 "read": true, 00:15:12.312 "write": true, 00:15:12.312 "unmap": true, 00:15:12.312 "flush": true, 00:15:12.312 "reset": true, 00:15:12.312 "nvme_admin": false, 00:15:12.312 "nvme_io": false, 00:15:12.312 "nvme_io_md": false, 00:15:12.312 "write_zeroes": true, 00:15:12.312 "zcopy": false, 00:15:12.312 "get_zone_info": false, 00:15:12.312 "zone_management": false, 00:15:12.312 "zone_append": false, 00:15:12.312 "compare": false, 00:15:12.312 "compare_and_write": false, 00:15:12.312 "abort": false, 00:15:12.312 "seek_hole": false, 00:15:12.312 "seek_data": false, 00:15:12.312 "copy": false, 00:15:12.312 "nvme_iov_md": false 00:15:12.312 }, 00:15:12.312 "memory_domains": [ 00:15:12.312 { 00:15:12.312 "dma_device_id": "system", 00:15:12.312 "dma_device_type": 1 00:15:12.312 }, 00:15:12.312 { 00:15:12.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.312 "dma_device_type": 2 00:15:12.312 }, 00:15:12.312 { 00:15:12.312 "dma_device_id": "system", 00:15:12.312 "dma_device_type": 1 00:15:12.312 }, 00:15:12.312 { 00:15:12.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.312 "dma_device_type": 2 00:15:12.312 }, 00:15:12.312 { 00:15:12.312 "dma_device_id": "system", 00:15:12.312 "dma_device_type": 1 00:15:12.312 }, 00:15:12.312 { 00:15:12.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.312 "dma_device_type": 2 00:15:12.312 }, 00:15:12.312 { 00:15:12.312 "dma_device_id": "system", 00:15:12.312 "dma_device_type": 1 00:15:12.312 }, 00:15:12.312 { 00:15:12.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.312 "dma_device_type": 2 00:15:12.312 } 00:15:12.312 ], 00:15:12.312 "driver_specific": { 00:15:12.312 "raid": { 00:15:12.312 "uuid": "e68a8404-34d3-4cd2-a614-d307b7145ae0", 00:15:12.312 "strip_size_kb": 64, 00:15:12.312 "state": "online", 00:15:12.312 "raid_level": "raid0", 00:15:12.312 "superblock": true, 00:15:12.312 "num_base_bdevs": 4, 00:15:12.312 "num_base_bdevs_discovered": 4, 00:15:12.313 "num_base_bdevs_operational": 4, 00:15:12.313 "base_bdevs_list": [ 00:15:12.313 { 00:15:12.313 "name": "BaseBdev1", 00:15:12.313 "uuid": "7e51744a-d53b-4de9-83a7-a6007bf1e999", 00:15:12.313 "is_configured": true, 00:15:12.313 "data_offset": 2048, 00:15:12.313 "data_size": 63488 00:15:12.313 }, 00:15:12.313 { 00:15:12.313 "name": "BaseBdev2", 00:15:12.313 "uuid": "d386861a-d409-4eea-92ed-cad92250edb4", 00:15:12.313 "is_configured": true, 00:15:12.313 "data_offset": 2048, 00:15:12.313 "data_size": 63488 00:15:12.313 }, 00:15:12.313 { 00:15:12.313 "name": "BaseBdev3", 00:15:12.313 "uuid": "4a340c4e-5eec-492c-b3dc-7a5bd07c4517", 00:15:12.313 "is_configured": true, 00:15:12.313 "data_offset": 2048, 00:15:12.313 "data_size": 63488 00:15:12.313 }, 00:15:12.313 { 00:15:12.313 "name": "BaseBdev4", 00:15:12.313 "uuid": "629b5627-3812-4769-a117-7b92817db01b", 00:15:12.313 "is_configured": true, 00:15:12.313 "data_offset": 2048, 00:15:12.313 "data_size": 63488 00:15:12.313 } 00:15:12.313 ] 00:15:12.313 } 00:15:12.313 } 00:15:12.313 }' 00:15:12.313 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.313 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:12.313 BaseBdev2 00:15:12.313 BaseBdev3 00:15:12.313 BaseBdev4' 00:15:12.313 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.313 10:22:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:12.313 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:12.570 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:12.570 "name": "BaseBdev1", 00:15:12.570 "aliases": [ 00:15:12.570 "7e51744a-d53b-4de9-83a7-a6007bf1e999" 00:15:12.570 ], 00:15:12.570 "product_name": "Malloc disk", 00:15:12.570 "block_size": 512, 00:15:12.570 "num_blocks": 65536, 00:15:12.570 "uuid": "7e51744a-d53b-4de9-83a7-a6007bf1e999", 00:15:12.570 "assigned_rate_limits": { 00:15:12.570 "rw_ios_per_sec": 0, 00:15:12.570 "rw_mbytes_per_sec": 0, 00:15:12.570 "r_mbytes_per_sec": 0, 00:15:12.570 "w_mbytes_per_sec": 0 00:15:12.570 }, 00:15:12.570 "claimed": true, 00:15:12.570 "claim_type": "exclusive_write", 00:15:12.570 "zoned": false, 00:15:12.570 "supported_io_types": { 00:15:12.570 "read": true, 00:15:12.570 "write": true, 00:15:12.570 "unmap": true, 00:15:12.570 "flush": true, 00:15:12.570 "reset": true, 00:15:12.570 "nvme_admin": false, 00:15:12.570 "nvme_io": false, 00:15:12.570 "nvme_io_md": false, 00:15:12.571 "write_zeroes": true, 00:15:12.571 "zcopy": true, 00:15:12.571 "get_zone_info": false, 00:15:12.571 "zone_management": false, 00:15:12.571 "zone_append": false, 00:15:12.571 "compare": false, 00:15:12.571 "compare_and_write": false, 00:15:12.571 "abort": true, 00:15:12.571 "seek_hole": false, 00:15:12.571 "seek_data": false, 00:15:12.571 "copy": true, 00:15:12.571 "nvme_iov_md": false 00:15:12.571 }, 00:15:12.571 "memory_domains": [ 00:15:12.571 { 00:15:12.571 "dma_device_id": "system", 00:15:12.571 "dma_device_type": 1 00:15:12.571 }, 00:15:12.571 { 00:15:12.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.571 "dma_device_type": 2 00:15:12.571 } 00:15:12.571 ], 00:15:12.571 "driver_specific": {} 00:15:12.571 }' 00:15:12.571 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.571 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:12.571 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:12.571 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.571 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:12.571 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:12.571 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.829 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:12.829 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:12.829 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.829 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:12.829 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:12.829 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:12.829 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:12.829 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.087 "name": "BaseBdev2", 00:15:13.087 "aliases": [ 00:15:13.087 "d386861a-d409-4eea-92ed-cad92250edb4" 00:15:13.087 ], 00:15:13.087 "product_name": "Malloc disk", 00:15:13.087 "block_size": 512, 00:15:13.087 "num_blocks": 65536, 00:15:13.087 "uuid": "d386861a-d409-4eea-92ed-cad92250edb4", 00:15:13.087 "assigned_rate_limits": { 00:15:13.087 "rw_ios_per_sec": 0, 00:15:13.087 "rw_mbytes_per_sec": 0, 00:15:13.087 "r_mbytes_per_sec": 0, 00:15:13.087 "w_mbytes_per_sec": 0 00:15:13.087 }, 00:15:13.087 "claimed": true, 00:15:13.087 "claim_type": "exclusive_write", 00:15:13.087 "zoned": false, 00:15:13.087 "supported_io_types": { 00:15:13.087 "read": true, 00:15:13.087 "write": true, 00:15:13.087 "unmap": true, 00:15:13.087 "flush": true, 00:15:13.087 "reset": true, 00:15:13.087 "nvme_admin": false, 00:15:13.087 "nvme_io": false, 00:15:13.087 "nvme_io_md": false, 00:15:13.087 "write_zeroes": true, 00:15:13.087 "zcopy": true, 00:15:13.087 "get_zone_info": false, 00:15:13.087 "zone_management": false, 00:15:13.087 "zone_append": false, 00:15:13.087 "compare": false, 00:15:13.087 "compare_and_write": false, 00:15:13.087 "abort": true, 00:15:13.087 "seek_hole": false, 00:15:13.087 "seek_data": false, 00:15:13.087 "copy": true, 00:15:13.087 "nvme_iov_md": false 00:15:13.087 }, 00:15:13.087 "memory_domains": [ 00:15:13.087 { 00:15:13.087 "dma_device_id": "system", 00:15:13.087 "dma_device_type": 1 00:15:13.087 }, 00:15:13.087 { 00:15:13.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.087 "dma_device_type": 2 00:15:13.087 } 00:15:13.087 ], 00:15:13.087 "driver_specific": {} 00:15:13.087 }' 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.087 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.345 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:13.345 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.345 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.345 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:13.345 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.345 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:13.345 10:22:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.603 "name": "BaseBdev3", 00:15:13.603 "aliases": [ 00:15:13.603 "4a340c4e-5eec-492c-b3dc-7a5bd07c4517" 00:15:13.603 ], 00:15:13.603 "product_name": "Malloc disk", 00:15:13.603 "block_size": 512, 00:15:13.603 "num_blocks": 65536, 00:15:13.603 "uuid": "4a340c4e-5eec-492c-b3dc-7a5bd07c4517", 00:15:13.603 "assigned_rate_limits": { 00:15:13.603 "rw_ios_per_sec": 0, 00:15:13.603 "rw_mbytes_per_sec": 0, 00:15:13.603 "r_mbytes_per_sec": 0, 00:15:13.603 "w_mbytes_per_sec": 0 00:15:13.603 }, 00:15:13.603 "claimed": true, 00:15:13.603 "claim_type": "exclusive_write", 00:15:13.603 "zoned": false, 00:15:13.603 "supported_io_types": { 00:15:13.603 "read": true, 00:15:13.603 "write": true, 00:15:13.603 "unmap": true, 00:15:13.603 "flush": true, 00:15:13.603 "reset": true, 00:15:13.603 "nvme_admin": false, 00:15:13.603 "nvme_io": false, 00:15:13.603 "nvme_io_md": false, 00:15:13.603 "write_zeroes": true, 00:15:13.603 "zcopy": true, 00:15:13.603 "get_zone_info": false, 00:15:13.603 "zone_management": false, 00:15:13.603 "zone_append": false, 00:15:13.603 "compare": false, 00:15:13.603 "compare_and_write": false, 00:15:13.603 "abort": true, 00:15:13.603 "seek_hole": false, 00:15:13.603 "seek_data": false, 00:15:13.603 "copy": true, 00:15:13.603 "nvme_iov_md": false 00:15:13.603 }, 00:15:13.603 "memory_domains": [ 00:15:13.603 { 00:15:13.603 "dma_device_id": "system", 00:15:13.603 "dma_device_type": 1 00:15:13.603 }, 00:15:13.603 { 00:15:13.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.603 "dma_device_type": 2 00:15:13.603 } 00:15:13.603 ], 00:15:13.603 "driver_specific": {} 00:15:13.603 }' 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:13.603 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.861 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:15:13.861 "name": "BaseBdev4", 00:15:13.861 "aliases": [ 00:15:13.861 "629b5627-3812-4769-a117-7b92817db01b" 00:15:13.861 ], 00:15:13.861 "product_name": "Malloc disk", 00:15:13.861 "block_size": 512, 00:15:13.861 "num_blocks": 65536, 00:15:13.861 "uuid": "629b5627-3812-4769-a117-7b92817db01b", 00:15:13.861 "assigned_rate_limits": { 00:15:13.861 "rw_ios_per_sec": 0, 00:15:13.861 "rw_mbytes_per_sec": 0, 00:15:13.861 "r_mbytes_per_sec": 0, 00:15:13.861 "w_mbytes_per_sec": 0 00:15:13.861 }, 00:15:13.861 "claimed": true, 00:15:13.861 "claim_type": "exclusive_write", 00:15:13.861 "zoned": false, 00:15:13.861 "supported_io_types": { 00:15:13.861 "read": true, 00:15:13.861 "write": true, 00:15:13.861 "unmap": true, 00:15:13.861 "flush": true, 00:15:13.861 "reset": true, 00:15:13.861 "nvme_admin": false, 00:15:13.861 "nvme_io": false, 00:15:13.861 "nvme_io_md": false, 00:15:13.861 "write_zeroes": true, 00:15:13.861 "zcopy": true, 00:15:13.861 "get_zone_info": false, 00:15:13.861 "zone_management": false, 00:15:13.861 "zone_append": false, 00:15:13.861 "compare": false, 00:15:13.861 "compare_and_write": false, 00:15:13.861 "abort": true, 00:15:13.861 "seek_hole": false, 00:15:13.861 "seek_data": false, 00:15:13.861 "copy": true, 00:15:13.861 "nvme_iov_md": false 00:15:13.861 }, 00:15:13.861 "memory_domains": [ 00:15:13.861 { 00:15:13.861 "dma_device_id": "system", 00:15:13.861 "dma_device_type": 1 00:15:13.861 }, 00:15:13.861 { 00:15:13.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.861 "dma_device_type": 2 00:15:13.861 } 00:15:13.861 ], 00:15:13.861 "driver_specific": {} 00:15:13.861 }' 00:15:13.861 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.861 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.861 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:13.861 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:13.861 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.119 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:14.119 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.119 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.119 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.119 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.119 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.119 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.119 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:14.378 [2024-07-15 10:22:38.935709] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.378 [2024-07-15 10:22:38.935728] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.378 [2024-07-15 10:22:38.935760] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 
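The four dumps above are the same per-member check repeated for BaseBdev1 through BaseBdev4: bdev_raid.sh@204 fetches the bdev description over the dedicated raid RPC socket, and @205-@208 assert that each Malloc base bdev has 512-byte blocks and no metadata, interleave or DIF configured. A minimal sketch of that loop, assuming only the rpc.py path, socket and jq filters already visible in the log:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')      # one-element array -> object
        [[ $(echo "$info" | jq .block_size) == 512 ]]          # data block size
        [[ $(echo "$info" | jq .md_size) == null ]]            # no separate metadata
        [[ $(echo "$info" | jq .md_interleave) == null ]]
        [[ $(echo "$info" | jq .dif_type) == null ]]           # no DIF protection
    done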
-- # local expected_state 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.378 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.378 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.378 "name": "Existed_Raid", 00:15:14.378 "uuid": "e68a8404-34d3-4cd2-a614-d307b7145ae0", 00:15:14.378 "strip_size_kb": 64, 00:15:14.378 "state": "offline", 00:15:14.378 "raid_level": "raid0", 00:15:14.378 "superblock": true, 00:15:14.378 "num_base_bdevs": 4, 00:15:14.378 "num_base_bdevs_discovered": 3, 00:15:14.378 "num_base_bdevs_operational": 3, 00:15:14.378 "base_bdevs_list": [ 00:15:14.379 { 00:15:14.379 "name": null, 00:15:14.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.379 "is_configured": false, 00:15:14.379 "data_offset": 2048, 00:15:14.379 "data_size": 63488 00:15:14.379 }, 00:15:14.379 { 00:15:14.379 "name": "BaseBdev2", 00:15:14.379 "uuid": "d386861a-d409-4eea-92ed-cad92250edb4", 00:15:14.379 "is_configured": true, 00:15:14.379 "data_offset": 2048, 00:15:14.379 "data_size": 63488 00:15:14.379 }, 00:15:14.379 { 00:15:14.379 "name": "BaseBdev3", 00:15:14.379 "uuid": "4a340c4e-5eec-492c-b3dc-7a5bd07c4517", 00:15:14.379 "is_configured": true, 00:15:14.379 "data_offset": 2048, 00:15:14.379 "data_size": 63488 00:15:14.379 }, 00:15:14.379 { 00:15:14.379 "name": "BaseBdev4", 00:15:14.379 "uuid": "629b5627-3812-4769-a117-7b92817db01b", 00:15:14.379 "is_configured": true, 00:15:14.379 "data_offset": 2048, 00:15:14.379 "data_size": 63488 00:15:14.379 } 00:15:14.379 ] 00:15:14.379 }' 00:15:14.379 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.379 10:22:39 
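raid0 carries no redundancy (has_redundancy returns 1 for it at @213-@215), so deleting BaseBdev1 is expected to push Existed_Raid into the offline state rather than a degraded one, and verify_raid_bdev_state re-reads the raid bdev to confirm exactly that. A sketch of the state check, using the same RPC call and jq filter as @126 and the field names visible in the dump:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(echo "$raid_info" | jq -r .state) == offline ]]              # raid0 cannot survive a lost member
    [[ $(echo "$raid_info" | jq -r .raid_level) == raid0 ]]
    [[ $(echo "$raid_info" | jq .num_base_bdevs_discovered) -eq 3 ]]  # 3 of 4 members remain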
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.945 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:14.945 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:14.945 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.945 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:15.204 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:15.204 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.204 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:15.204 [2024-07-15 10:22:39.903093] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:15.204 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:15.204 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.204 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.204 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:15.462 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:15.462 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.462 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:15.720 [2024-07-15 10:22:40.273281] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:15.720 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:15.720 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.720 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.720 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:15.720 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:15.720 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.720 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:15.978 [2024-07-15 10:22:40.623392] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:15.978 [2024-07-15 10:22:40.623422] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x221d830 name Existed_Raid, state offline 00:15:15.978 10:22:40 
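The loop at @285-@291 then walks the remaining members: each pass reads the first raid bdev name back (it must still be Existed_Raid) and deletes the next malloc bdev, and once the last member is gone @293-@294 expect bdev_raid_get_bdevs to return nothing, i.e. the offline raid bdev has been cleaned up. A condensed sketch of that teardown loop, assuming num_base_bdevs=4 as in this test:

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    num_base_bdevs=4
    for ((i = 1; i < num_base_bdevs; i++)); do
        raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        [[ $raid_bdev == Existed_Raid ]]                # offline raid bdev still registered
        $rpc bdev_malloc_delete "BaseBdev$((i + 1))"    # drop BaseBdev2, BaseBdev3, BaseBdev4 in turn
    done
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
    [[ -z $raid_bdev ]]                                 # all members gone, raid bdev cleaned up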
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:15.978 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:15.978 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.978 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:16.237 BaseBdev2 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:16.237 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.496 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.496 [ 00:15:16.496 { 00:15:16.496 "name": "BaseBdev2", 00:15:16.496 "aliases": [ 00:15:16.496 "f30718ce-8dcc-48f1-a708-7357bb813d1d" 00:15:16.496 ], 00:15:16.496 "product_name": "Malloc disk", 00:15:16.496 "block_size": 512, 00:15:16.496 "num_blocks": 65536, 00:15:16.496 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:16.496 "assigned_rate_limits": { 00:15:16.496 "rw_ios_per_sec": 0, 00:15:16.496 "rw_mbytes_per_sec": 0, 00:15:16.496 "r_mbytes_per_sec": 0, 00:15:16.496 "w_mbytes_per_sec": 0 00:15:16.496 }, 00:15:16.496 "claimed": false, 00:15:16.496 "zoned": false, 00:15:16.496 "supported_io_types": { 00:15:16.496 "read": true, 00:15:16.496 "write": true, 00:15:16.496 "unmap": true, 00:15:16.496 "flush": true, 00:15:16.496 "reset": true, 00:15:16.496 "nvme_admin": false, 00:15:16.496 "nvme_io": false, 00:15:16.496 "nvme_io_md": false, 00:15:16.496 "write_zeroes": true, 00:15:16.496 "zcopy": true, 00:15:16.496 "get_zone_info": false, 00:15:16.496 "zone_management": false, 00:15:16.496 "zone_append": false, 00:15:16.496 "compare": false, 00:15:16.496 "compare_and_write": false, 00:15:16.496 "abort": true, 
00:15:16.496 "seek_hole": false, 00:15:16.496 "seek_data": false, 00:15:16.496 "copy": true, 00:15:16.496 "nvme_iov_md": false 00:15:16.496 }, 00:15:16.496 "memory_domains": [ 00:15:16.496 { 00:15:16.496 "dma_device_id": "system", 00:15:16.496 "dma_device_type": 1 00:15:16.496 }, 00:15:16.496 { 00:15:16.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.496 "dma_device_type": 2 00:15:16.496 } 00:15:16.496 ], 00:15:16.496 "driver_specific": {} 00:15:16.496 } 00:15:16.496 ] 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.754 BaseBdev3 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:16.754 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.013 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:17.013 [ 00:15:17.013 { 00:15:17.013 "name": "BaseBdev3", 00:15:17.013 "aliases": [ 00:15:17.013 "8e7a351c-541e-4bc9-b297-25c1470bdda8" 00:15:17.013 ], 00:15:17.013 "product_name": "Malloc disk", 00:15:17.013 "block_size": 512, 00:15:17.013 "num_blocks": 65536, 00:15:17.013 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:17.013 "assigned_rate_limits": { 00:15:17.013 "rw_ios_per_sec": 0, 00:15:17.013 "rw_mbytes_per_sec": 0, 00:15:17.013 "r_mbytes_per_sec": 0, 00:15:17.013 "w_mbytes_per_sec": 0 00:15:17.013 }, 00:15:17.013 "claimed": false, 00:15:17.013 "zoned": false, 00:15:17.013 "supported_io_types": { 00:15:17.013 "read": true, 00:15:17.013 "write": true, 00:15:17.013 "unmap": true, 00:15:17.013 "flush": true, 00:15:17.013 "reset": true, 00:15:17.013 "nvme_admin": false, 00:15:17.013 "nvme_io": false, 00:15:17.013 "nvme_io_md": false, 00:15:17.013 "write_zeroes": true, 00:15:17.013 "zcopy": true, 00:15:17.013 "get_zone_info": false, 00:15:17.013 "zone_management": false, 00:15:17.013 "zone_append": false, 00:15:17.013 "compare": false, 00:15:17.013 "compare_and_write": false, 00:15:17.013 "abort": true, 00:15:17.013 "seek_hole": false, 00:15:17.013 "seek_data": false, 00:15:17.013 "copy": true, 00:15:17.013 "nvme_iov_md": false 00:15:17.013 }, 00:15:17.013 "memory_domains": [ 00:15:17.013 { 00:15:17.013 "dma_device_id": "system", 00:15:17.013 
"dma_device_type": 1 00:15:17.013 }, 00:15:17.013 { 00:15:17.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.013 "dma_device_type": 2 00:15:17.013 } 00:15:17.013 ], 00:15:17.013 "driver_specific": {} 00:15:17.013 } 00:15:17.013 ] 00:15:17.013 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:17.013 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:17.013 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:17.013 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:17.302 BaseBdev4 00:15:17.302 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:17.302 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:17.302 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:17.302 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:17.302 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:17.302 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:17.302 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:17.560 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:17.560 [ 00:15:17.560 { 00:15:17.560 "name": "BaseBdev4", 00:15:17.560 "aliases": [ 00:15:17.560 "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf" 00:15:17.560 ], 00:15:17.560 "product_name": "Malloc disk", 00:15:17.560 "block_size": 512, 00:15:17.560 "num_blocks": 65536, 00:15:17.560 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:17.560 "assigned_rate_limits": { 00:15:17.560 "rw_ios_per_sec": 0, 00:15:17.560 "rw_mbytes_per_sec": 0, 00:15:17.560 "r_mbytes_per_sec": 0, 00:15:17.560 "w_mbytes_per_sec": 0 00:15:17.560 }, 00:15:17.560 "claimed": false, 00:15:17.560 "zoned": false, 00:15:17.560 "supported_io_types": { 00:15:17.560 "read": true, 00:15:17.560 "write": true, 00:15:17.560 "unmap": true, 00:15:17.560 "flush": true, 00:15:17.560 "reset": true, 00:15:17.560 "nvme_admin": false, 00:15:17.560 "nvme_io": false, 00:15:17.560 "nvme_io_md": false, 00:15:17.560 "write_zeroes": true, 00:15:17.560 "zcopy": true, 00:15:17.560 "get_zone_info": false, 00:15:17.560 "zone_management": false, 00:15:17.560 "zone_append": false, 00:15:17.560 "compare": false, 00:15:17.560 "compare_and_write": false, 00:15:17.560 "abort": true, 00:15:17.560 "seek_hole": false, 00:15:17.560 "seek_data": false, 00:15:17.560 "copy": true, 00:15:17.560 "nvme_iov_md": false 00:15:17.560 }, 00:15:17.560 "memory_domains": [ 00:15:17.560 { 00:15:17.560 "dma_device_id": "system", 00:15:17.560 "dma_device_type": 1 00:15:17.560 }, 00:15:17.560 { 00:15:17.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.560 "dma_device_type": 2 00:15:17.560 } 00:15:17.560 ], 00:15:17.560 "driver_specific": {} 00:15:17.560 } 00:15:17.560 ] 00:15:17.560 
10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:17.560 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:17.560 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:17.560 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:17.818 [2024-07-15 10:22:42.405137] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.818 [2024-07-15 10:22:42.405168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.818 [2024-07-15 10:22:42.405180] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.818 [2024-07-15 10:22:42.406099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.818 [2024-07-15 10:22:42.406140] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.818 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:17.818 "name": "Existed_Raid", 00:15:17.818 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:17.818 "strip_size_kb": 64, 00:15:17.818 "state": "configuring", 00:15:17.818 "raid_level": "raid0", 00:15:17.818 "superblock": true, 00:15:17.818 "num_base_bdevs": 4, 00:15:17.818 "num_base_bdevs_discovered": 3, 00:15:17.818 "num_base_bdevs_operational": 4, 00:15:17.818 "base_bdevs_list": [ 00:15:17.818 { 00:15:17.818 "name": "BaseBdev1", 00:15:17.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.818 "is_configured": false, 00:15:17.818 "data_offset": 0, 00:15:17.818 "data_size": 0 00:15:17.818 }, 00:15:17.818 { 00:15:17.818 "name": "BaseBdev2", 00:15:17.818 "uuid": 
"f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:17.818 "is_configured": true, 00:15:17.818 "data_offset": 2048, 00:15:17.818 "data_size": 63488 00:15:17.818 }, 00:15:17.818 { 00:15:17.818 "name": "BaseBdev3", 00:15:17.818 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:17.818 "is_configured": true, 00:15:17.818 "data_offset": 2048, 00:15:17.818 "data_size": 63488 00:15:17.818 }, 00:15:17.819 { 00:15:17.819 "name": "BaseBdev4", 00:15:17.819 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:17.819 "is_configured": true, 00:15:17.819 "data_offset": 2048, 00:15:17.819 "data_size": 63488 00:15:17.819 } 00:15:17.819 ] 00:15:17.819 }' 00:15:17.819 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:17.819 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.385 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:18.642 [2024-07-15 10:22:43.219211] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.642 "name": "Existed_Raid", 00:15:18.642 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:18.642 "strip_size_kb": 64, 00:15:18.642 "state": "configuring", 00:15:18.642 "raid_level": "raid0", 00:15:18.642 "superblock": true, 00:15:18.642 "num_base_bdevs": 4, 00:15:18.642 "num_base_bdevs_discovered": 2, 00:15:18.642 "num_base_bdevs_operational": 4, 00:15:18.642 "base_bdevs_list": [ 00:15:18.642 { 00:15:18.642 "name": "BaseBdev1", 00:15:18.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.642 "is_configured": false, 00:15:18.642 "data_offset": 0, 00:15:18.642 "data_size": 0 00:15:18.642 }, 00:15:18.642 { 00:15:18.642 "name": null, 00:15:18.642 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:18.642 
"is_configured": false, 00:15:18.642 "data_offset": 2048, 00:15:18.642 "data_size": 63488 00:15:18.642 }, 00:15:18.642 { 00:15:18.642 "name": "BaseBdev3", 00:15:18.642 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:18.642 "is_configured": true, 00:15:18.642 "data_offset": 2048, 00:15:18.642 "data_size": 63488 00:15:18.642 }, 00:15:18.642 { 00:15:18.642 "name": "BaseBdev4", 00:15:18.642 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:18.642 "is_configured": true, 00:15:18.642 "data_offset": 2048, 00:15:18.642 "data_size": 63488 00:15:18.642 } 00:15:18.642 ] 00:15:18.642 }' 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.642 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.208 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.208 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.465 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:19.465 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.724 [2024-07-15 10:22:44.268677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.724 BaseBdev1 00:15:19.724 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:19.724 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:19.724 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:19.724 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:19.724 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:19.724 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:19.724 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.724 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:19.997 [ 00:15:19.997 { 00:15:19.997 "name": "BaseBdev1", 00:15:19.997 "aliases": [ 00:15:19.997 "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82" 00:15:19.997 ], 00:15:19.997 "product_name": "Malloc disk", 00:15:19.997 "block_size": 512, 00:15:19.997 "num_blocks": 65536, 00:15:19.997 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:19.997 "assigned_rate_limits": { 00:15:19.997 "rw_ios_per_sec": 0, 00:15:19.997 "rw_mbytes_per_sec": 0, 00:15:19.997 "r_mbytes_per_sec": 0, 00:15:19.997 "w_mbytes_per_sec": 0 00:15:19.997 }, 00:15:19.997 "claimed": true, 00:15:19.997 "claim_type": "exclusive_write", 00:15:19.997 "zoned": false, 00:15:19.997 "supported_io_types": { 00:15:19.997 "read": true, 00:15:19.997 "write": true, 00:15:19.997 "unmap": true, 00:15:19.997 "flush": true, 00:15:19.997 "reset": true, 
00:15:19.997 "nvme_admin": false, 00:15:19.997 "nvme_io": false, 00:15:19.997 "nvme_io_md": false, 00:15:19.997 "write_zeroes": true, 00:15:19.997 "zcopy": true, 00:15:19.997 "get_zone_info": false, 00:15:19.997 "zone_management": false, 00:15:19.997 "zone_append": false, 00:15:19.997 "compare": false, 00:15:19.997 "compare_and_write": false, 00:15:19.997 "abort": true, 00:15:19.997 "seek_hole": false, 00:15:19.997 "seek_data": false, 00:15:19.997 "copy": true, 00:15:19.997 "nvme_iov_md": false 00:15:19.997 }, 00:15:19.997 "memory_domains": [ 00:15:19.997 { 00:15:19.997 "dma_device_id": "system", 00:15:19.997 "dma_device_type": 1 00:15:19.997 }, 00:15:19.997 { 00:15:19.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.997 "dma_device_type": 2 00:15:19.997 } 00:15:19.997 ], 00:15:19.997 "driver_specific": {} 00:15:19.997 } 00:15:19.997 ] 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.997 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.254 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.254 "name": "Existed_Raid", 00:15:20.254 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:20.254 "strip_size_kb": 64, 00:15:20.254 "state": "configuring", 00:15:20.254 "raid_level": "raid0", 00:15:20.254 "superblock": true, 00:15:20.254 "num_base_bdevs": 4, 00:15:20.254 "num_base_bdevs_discovered": 3, 00:15:20.254 "num_base_bdevs_operational": 4, 00:15:20.254 "base_bdevs_list": [ 00:15:20.254 { 00:15:20.254 "name": "BaseBdev1", 00:15:20.254 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:20.254 "is_configured": true, 00:15:20.254 "data_offset": 2048, 00:15:20.254 "data_size": 63488 00:15:20.254 }, 00:15:20.254 { 00:15:20.254 "name": null, 00:15:20.254 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:20.254 "is_configured": false, 00:15:20.254 "data_offset": 2048, 00:15:20.254 "data_size": 63488 00:15:20.254 }, 00:15:20.254 { 00:15:20.254 "name": "BaseBdev3", 00:15:20.254 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 
00:15:20.254 "is_configured": true, 00:15:20.254 "data_offset": 2048, 00:15:20.254 "data_size": 63488 00:15:20.254 }, 00:15:20.254 { 00:15:20.254 "name": "BaseBdev4", 00:15:20.254 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:20.254 "is_configured": true, 00:15:20.254 "data_offset": 2048, 00:15:20.254 "data_size": 63488 00:15:20.254 } 00:15:20.254 ] 00:15:20.254 }' 00:15:20.254 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.254 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.511 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.511 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:20.768 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:20.769 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:21.025 [2024-07-15 10:22:45.596110] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.025 "name": "Existed_Raid", 00:15:21.025 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:21.025 "strip_size_kb": 64, 00:15:21.025 "state": "configuring", 00:15:21.025 "raid_level": "raid0", 00:15:21.025 "superblock": true, 00:15:21.025 "num_base_bdevs": 4, 00:15:21.025 "num_base_bdevs_discovered": 2, 00:15:21.025 "num_base_bdevs_operational": 4, 00:15:21.025 "base_bdevs_list": [ 00:15:21.025 { 00:15:21.025 "name": "BaseBdev1", 00:15:21.025 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:21.025 "is_configured": true, 00:15:21.025 
"data_offset": 2048, 00:15:21.025 "data_size": 63488 00:15:21.025 }, 00:15:21.025 { 00:15:21.025 "name": null, 00:15:21.025 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:21.025 "is_configured": false, 00:15:21.025 "data_offset": 2048, 00:15:21.025 "data_size": 63488 00:15:21.025 }, 00:15:21.025 { 00:15:21.025 "name": null, 00:15:21.025 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:21.025 "is_configured": false, 00:15:21.025 "data_offset": 2048, 00:15:21.025 "data_size": 63488 00:15:21.025 }, 00:15:21.025 { 00:15:21.025 "name": "BaseBdev4", 00:15:21.025 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:21.025 "is_configured": true, 00:15:21.025 "data_offset": 2048, 00:15:21.025 "data_size": 63488 00:15:21.025 } 00:15:21.025 ] 00:15:21.025 }' 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.025 10:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.589 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.589 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:21.847 [2024-07-15 10:22:46.594697] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.847 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.103 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.103 "name": "Existed_Raid", 00:15:22.103 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:22.103 "strip_size_kb": 64, 
00:15:22.103 "state": "configuring", 00:15:22.103 "raid_level": "raid0", 00:15:22.103 "superblock": true, 00:15:22.103 "num_base_bdevs": 4, 00:15:22.103 "num_base_bdevs_discovered": 3, 00:15:22.103 "num_base_bdevs_operational": 4, 00:15:22.103 "base_bdevs_list": [ 00:15:22.103 { 00:15:22.103 "name": "BaseBdev1", 00:15:22.103 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:22.103 "is_configured": true, 00:15:22.103 "data_offset": 2048, 00:15:22.103 "data_size": 63488 00:15:22.103 }, 00:15:22.103 { 00:15:22.103 "name": null, 00:15:22.103 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:22.103 "is_configured": false, 00:15:22.103 "data_offset": 2048, 00:15:22.103 "data_size": 63488 00:15:22.103 }, 00:15:22.103 { 00:15:22.103 "name": "BaseBdev3", 00:15:22.103 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:22.103 "is_configured": true, 00:15:22.103 "data_offset": 2048, 00:15:22.103 "data_size": 63488 00:15:22.103 }, 00:15:22.103 { 00:15:22.103 "name": "BaseBdev4", 00:15:22.103 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:22.103 "is_configured": true, 00:15:22.103 "data_offset": 2048, 00:15:22.103 "data_size": 63488 00:15:22.103 } 00:15:22.103 ] 00:15:22.103 }' 00:15:22.103 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.103 10:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.667 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.667 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:22.667 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:22.667 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:22.925 [2024-07-15 10:22:47.565216] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:22.925 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.183 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:23.183 "name": "Existed_Raid", 00:15:23.183 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:23.183 "strip_size_kb": 64, 00:15:23.183 "state": "configuring", 00:15:23.183 "raid_level": "raid0", 00:15:23.183 "superblock": true, 00:15:23.183 "num_base_bdevs": 4, 00:15:23.183 "num_base_bdevs_discovered": 2, 00:15:23.183 "num_base_bdevs_operational": 4, 00:15:23.183 "base_bdevs_list": [ 00:15:23.183 { 00:15:23.183 "name": null, 00:15:23.183 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:23.183 "is_configured": false, 00:15:23.183 "data_offset": 2048, 00:15:23.183 "data_size": 63488 00:15:23.183 }, 00:15:23.183 { 00:15:23.183 "name": null, 00:15:23.183 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:23.183 "is_configured": false, 00:15:23.183 "data_offset": 2048, 00:15:23.183 "data_size": 63488 00:15:23.183 }, 00:15:23.183 { 00:15:23.183 "name": "BaseBdev3", 00:15:23.183 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:23.183 "is_configured": true, 00:15:23.184 "data_offset": 2048, 00:15:23.184 "data_size": 63488 00:15:23.184 }, 00:15:23.184 { 00:15:23.184 "name": "BaseBdev4", 00:15:23.184 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:23.184 "is_configured": true, 00:15:23.184 "data_offset": 2048, 00:15:23.184 "data_size": 63488 00:15:23.184 } 00:15:23.184 ] 00:15:23.184 }' 00:15:23.184 10:22:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:23.184 10:22:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.750 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.750 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:23.750 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:23.750 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:24.010 [2024-07-15 10:22:48.577484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.010 "name": "Existed_Raid", 00:15:24.010 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:24.010 "strip_size_kb": 64, 00:15:24.010 "state": "configuring", 00:15:24.010 "raid_level": "raid0", 00:15:24.010 "superblock": true, 00:15:24.010 "num_base_bdevs": 4, 00:15:24.010 "num_base_bdevs_discovered": 3, 00:15:24.010 "num_base_bdevs_operational": 4, 00:15:24.010 "base_bdevs_list": [ 00:15:24.010 { 00:15:24.010 "name": null, 00:15:24.010 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:24.010 "is_configured": false, 00:15:24.010 "data_offset": 2048, 00:15:24.010 "data_size": 63488 00:15:24.010 }, 00:15:24.010 { 00:15:24.010 "name": "BaseBdev2", 00:15:24.010 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:24.010 "is_configured": true, 00:15:24.010 "data_offset": 2048, 00:15:24.010 "data_size": 63488 00:15:24.010 }, 00:15:24.010 { 00:15:24.010 "name": "BaseBdev3", 00:15:24.010 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:24.010 "is_configured": true, 00:15:24.010 "data_offset": 2048, 00:15:24.010 "data_size": 63488 00:15:24.010 }, 00:15:24.010 { 00:15:24.010 "name": "BaseBdev4", 00:15:24.010 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:24.010 "is_configured": true, 00:15:24.010 "data_offset": 2048, 00:15:24.010 "data_size": 63488 00:15:24.010 } 00:15:24.010 ] 00:15:24.010 }' 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.010 10:22:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.577 10:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.577 10:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:24.836 10:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:24.836 10:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.836 10:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:24.836 10:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b2faadfa-b999-4cb8-9fae-8cc2bcddcb82 00:15:25.095 [2024-07-15 10:22:49.715191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:25.095 [2024-07-15 10:22:49.715317] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2213c90 00:15:25.095 [2024-07-15 
10:22:49.715325] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:25.095 [2024-07-15 10:22:49.715451] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2214d40 00:15:25.095 [2024-07-15 10:22:49.715526] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2213c90 00:15:25.095 [2024-07-15 10:22:49.715533] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2213c90 00:15:25.095 [2024-07-15 10:22:49.715591] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.095 NewBaseBdev 00:15:25.095 10:22:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:25.095 10:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:25.095 10:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:25.095 10:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:25.095 10:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:25.095 10:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:25.095 10:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.354 10:22:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:25.354 [ 00:15:25.354 { 00:15:25.354 "name": "NewBaseBdev", 00:15:25.354 "aliases": [ 00:15:25.354 "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82" 00:15:25.354 ], 00:15:25.355 "product_name": "Malloc disk", 00:15:25.355 "block_size": 512, 00:15:25.355 "num_blocks": 65536, 00:15:25.355 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:25.355 "assigned_rate_limits": { 00:15:25.355 "rw_ios_per_sec": 0, 00:15:25.355 "rw_mbytes_per_sec": 0, 00:15:25.355 "r_mbytes_per_sec": 0, 00:15:25.355 "w_mbytes_per_sec": 0 00:15:25.355 }, 00:15:25.355 "claimed": true, 00:15:25.355 "claim_type": "exclusive_write", 00:15:25.355 "zoned": false, 00:15:25.355 "supported_io_types": { 00:15:25.355 "read": true, 00:15:25.355 "write": true, 00:15:25.355 "unmap": true, 00:15:25.355 "flush": true, 00:15:25.355 "reset": true, 00:15:25.355 "nvme_admin": false, 00:15:25.355 "nvme_io": false, 00:15:25.355 "nvme_io_md": false, 00:15:25.355 "write_zeroes": true, 00:15:25.355 "zcopy": true, 00:15:25.355 "get_zone_info": false, 00:15:25.355 "zone_management": false, 00:15:25.355 "zone_append": false, 00:15:25.355 "compare": false, 00:15:25.355 "compare_and_write": false, 00:15:25.355 "abort": true, 00:15:25.355 "seek_hole": false, 00:15:25.355 "seek_data": false, 00:15:25.355 "copy": true, 00:15:25.355 "nvme_iov_md": false 00:15:25.355 }, 00:15:25.355 "memory_domains": [ 00:15:25.355 { 00:15:25.355 "dma_device_id": "system", 00:15:25.355 "dma_device_type": 1 00:15:25.355 }, 00:15:25.355 { 00:15:25.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.355 "dma_device_type": 2 00:15:25.355 } 00:15:25.355 ], 00:15:25.355 "driver_specific": {} 00:15:25.355 } 00:15:25.355 ] 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:25.355 10:22:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.355 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.614 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.614 "name": "Existed_Raid", 00:15:25.614 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:25.614 "strip_size_kb": 64, 00:15:25.614 "state": "online", 00:15:25.614 "raid_level": "raid0", 00:15:25.614 "superblock": true, 00:15:25.614 "num_base_bdevs": 4, 00:15:25.614 "num_base_bdevs_discovered": 4, 00:15:25.614 "num_base_bdevs_operational": 4, 00:15:25.614 "base_bdevs_list": [ 00:15:25.614 { 00:15:25.614 "name": "NewBaseBdev", 00:15:25.614 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:25.614 "is_configured": true, 00:15:25.614 "data_offset": 2048, 00:15:25.614 "data_size": 63488 00:15:25.614 }, 00:15:25.614 { 00:15:25.614 "name": "BaseBdev2", 00:15:25.614 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:25.614 "is_configured": true, 00:15:25.614 "data_offset": 2048, 00:15:25.614 "data_size": 63488 00:15:25.614 }, 00:15:25.614 { 00:15:25.614 "name": "BaseBdev3", 00:15:25.614 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:25.614 "is_configured": true, 00:15:25.614 "data_offset": 2048, 00:15:25.614 "data_size": 63488 00:15:25.614 }, 00:15:25.614 { 00:15:25.614 "name": "BaseBdev4", 00:15:25.614 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:25.614 "is_configured": true, 00:15:25.614 "data_offset": 2048, 00:15:25.614 "data_size": 63488 00:15:25.614 } 00:15:25.614 ] 00:15:25.614 }' 00:15:25.614 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.614 10:22:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.183 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.183 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:26.183 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:26.183 10:22:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:26.183 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:26.183 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:26.183 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:26.183 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:26.183 [2024-07-15 10:22:50.890421] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.183 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:26.183 "name": "Existed_Raid", 00:15:26.183 "aliases": [ 00:15:26.183 "d682cbeb-122e-4d00-a596-cb002ca2d507" 00:15:26.183 ], 00:15:26.183 "product_name": "Raid Volume", 00:15:26.183 "block_size": 512, 00:15:26.183 "num_blocks": 253952, 00:15:26.183 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:26.183 "assigned_rate_limits": { 00:15:26.183 "rw_ios_per_sec": 0, 00:15:26.183 "rw_mbytes_per_sec": 0, 00:15:26.183 "r_mbytes_per_sec": 0, 00:15:26.183 "w_mbytes_per_sec": 0 00:15:26.183 }, 00:15:26.183 "claimed": false, 00:15:26.183 "zoned": false, 00:15:26.183 "supported_io_types": { 00:15:26.183 "read": true, 00:15:26.183 "write": true, 00:15:26.183 "unmap": true, 00:15:26.183 "flush": true, 00:15:26.183 "reset": true, 00:15:26.183 "nvme_admin": false, 00:15:26.183 "nvme_io": false, 00:15:26.183 "nvme_io_md": false, 00:15:26.183 "write_zeroes": true, 00:15:26.183 "zcopy": false, 00:15:26.183 "get_zone_info": false, 00:15:26.183 "zone_management": false, 00:15:26.183 "zone_append": false, 00:15:26.183 "compare": false, 00:15:26.183 "compare_and_write": false, 00:15:26.183 "abort": false, 00:15:26.183 "seek_hole": false, 00:15:26.183 "seek_data": false, 00:15:26.183 "copy": false, 00:15:26.183 "nvme_iov_md": false 00:15:26.183 }, 00:15:26.183 "memory_domains": [ 00:15:26.183 { 00:15:26.183 "dma_device_id": "system", 00:15:26.183 "dma_device_type": 1 00:15:26.183 }, 00:15:26.183 { 00:15:26.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.183 "dma_device_type": 2 00:15:26.183 }, 00:15:26.183 { 00:15:26.183 "dma_device_id": "system", 00:15:26.183 "dma_device_type": 1 00:15:26.183 }, 00:15:26.183 { 00:15:26.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.183 "dma_device_type": 2 00:15:26.183 }, 00:15:26.183 { 00:15:26.183 "dma_device_id": "system", 00:15:26.183 "dma_device_type": 1 00:15:26.183 }, 00:15:26.183 { 00:15:26.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.183 "dma_device_type": 2 00:15:26.183 }, 00:15:26.183 { 00:15:26.183 "dma_device_id": "system", 00:15:26.183 "dma_device_type": 1 00:15:26.183 }, 00:15:26.183 { 00:15:26.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.183 "dma_device_type": 2 00:15:26.183 } 00:15:26.183 ], 00:15:26.183 "driver_specific": { 00:15:26.183 "raid": { 00:15:26.183 "uuid": "d682cbeb-122e-4d00-a596-cb002ca2d507", 00:15:26.183 "strip_size_kb": 64, 00:15:26.183 "state": "online", 00:15:26.183 "raid_level": "raid0", 00:15:26.183 "superblock": true, 00:15:26.183 "num_base_bdevs": 4, 00:15:26.183 "num_base_bdevs_discovered": 4, 00:15:26.183 "num_base_bdevs_operational": 4, 00:15:26.183 "base_bdevs_list": [ 00:15:26.183 { 00:15:26.184 "name": "NewBaseBdev", 00:15:26.184 "uuid": 
"b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:26.184 "is_configured": true, 00:15:26.184 "data_offset": 2048, 00:15:26.184 "data_size": 63488 00:15:26.184 }, 00:15:26.184 { 00:15:26.184 "name": "BaseBdev2", 00:15:26.184 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:26.184 "is_configured": true, 00:15:26.184 "data_offset": 2048, 00:15:26.184 "data_size": 63488 00:15:26.184 }, 00:15:26.184 { 00:15:26.184 "name": "BaseBdev3", 00:15:26.184 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:26.184 "is_configured": true, 00:15:26.184 "data_offset": 2048, 00:15:26.184 "data_size": 63488 00:15:26.184 }, 00:15:26.184 { 00:15:26.184 "name": "BaseBdev4", 00:15:26.184 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:26.184 "is_configured": true, 00:15:26.184 "data_offset": 2048, 00:15:26.184 "data_size": 63488 00:15:26.184 } 00:15:26.184 ] 00:15:26.184 } 00:15:26.184 } 00:15:26.184 }' 00:15:26.184 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.184 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:26.184 BaseBdev2 00:15:26.184 BaseBdev3 00:15:26.184 BaseBdev4' 00:15:26.184 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.184 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:26.184 10:22:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.442 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.442 "name": "NewBaseBdev", 00:15:26.442 "aliases": [ 00:15:26.442 "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82" 00:15:26.442 ], 00:15:26.442 "product_name": "Malloc disk", 00:15:26.442 "block_size": 512, 00:15:26.442 "num_blocks": 65536, 00:15:26.442 "uuid": "b2faadfa-b999-4cb8-9fae-8cc2bcddcb82", 00:15:26.442 "assigned_rate_limits": { 00:15:26.442 "rw_ios_per_sec": 0, 00:15:26.442 "rw_mbytes_per_sec": 0, 00:15:26.442 "r_mbytes_per_sec": 0, 00:15:26.442 "w_mbytes_per_sec": 0 00:15:26.442 }, 00:15:26.442 "claimed": true, 00:15:26.442 "claim_type": "exclusive_write", 00:15:26.442 "zoned": false, 00:15:26.442 "supported_io_types": { 00:15:26.442 "read": true, 00:15:26.442 "write": true, 00:15:26.442 "unmap": true, 00:15:26.442 "flush": true, 00:15:26.442 "reset": true, 00:15:26.442 "nvme_admin": false, 00:15:26.442 "nvme_io": false, 00:15:26.442 "nvme_io_md": false, 00:15:26.442 "write_zeroes": true, 00:15:26.442 "zcopy": true, 00:15:26.442 "get_zone_info": false, 00:15:26.442 "zone_management": false, 00:15:26.442 "zone_append": false, 00:15:26.442 "compare": false, 00:15:26.442 "compare_and_write": false, 00:15:26.442 "abort": true, 00:15:26.442 "seek_hole": false, 00:15:26.442 "seek_data": false, 00:15:26.442 "copy": true, 00:15:26.442 "nvme_iov_md": false 00:15:26.442 }, 00:15:26.442 "memory_domains": [ 00:15:26.442 { 00:15:26.442 "dma_device_id": "system", 00:15:26.442 "dma_device_type": 1 00:15:26.442 }, 00:15:26.442 { 00:15:26.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.442 "dma_device_type": 2 00:15:26.442 } 00:15:26.442 ], 00:15:26.442 "driver_specific": {} 00:15:26.442 }' 00:15:26.442 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.442 10:22:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.442 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.442 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.700 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.700 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.700 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.700 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.700 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.700 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.700 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.701 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.701 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.701 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:26.701 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.960 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.960 "name": "BaseBdev2", 00:15:26.960 "aliases": [ 00:15:26.960 "f30718ce-8dcc-48f1-a708-7357bb813d1d" 00:15:26.960 ], 00:15:26.960 "product_name": "Malloc disk", 00:15:26.960 "block_size": 512, 00:15:26.960 "num_blocks": 65536, 00:15:26.960 "uuid": "f30718ce-8dcc-48f1-a708-7357bb813d1d", 00:15:26.960 "assigned_rate_limits": { 00:15:26.960 "rw_ios_per_sec": 0, 00:15:26.960 "rw_mbytes_per_sec": 0, 00:15:26.960 "r_mbytes_per_sec": 0, 00:15:26.960 "w_mbytes_per_sec": 0 00:15:26.960 }, 00:15:26.960 "claimed": true, 00:15:26.960 "claim_type": "exclusive_write", 00:15:26.960 "zoned": false, 00:15:26.960 "supported_io_types": { 00:15:26.960 "read": true, 00:15:26.960 "write": true, 00:15:26.960 "unmap": true, 00:15:26.960 "flush": true, 00:15:26.960 "reset": true, 00:15:26.960 "nvme_admin": false, 00:15:26.960 "nvme_io": false, 00:15:26.960 "nvme_io_md": false, 00:15:26.960 "write_zeroes": true, 00:15:26.960 "zcopy": true, 00:15:26.960 "get_zone_info": false, 00:15:26.960 "zone_management": false, 00:15:26.960 "zone_append": false, 00:15:26.960 "compare": false, 00:15:26.960 "compare_and_write": false, 00:15:26.960 "abort": true, 00:15:26.960 "seek_hole": false, 00:15:26.960 "seek_data": false, 00:15:26.960 "copy": true, 00:15:26.960 "nvme_iov_md": false 00:15:26.960 }, 00:15:26.960 "memory_domains": [ 00:15:26.960 { 00:15:26.960 "dma_device_id": "system", 00:15:26.960 "dma_device_type": 1 00:15:26.960 }, 00:15:26.960 { 00:15:26.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.960 "dma_device_type": 2 00:15:26.960 } 00:15:26.960 ], 00:15:26.960 "driver_specific": {} 00:15:26.960 }' 00:15:26.960 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.960 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.960 10:22:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.960 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.960 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:27.220 10:22:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.480 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.480 "name": "BaseBdev3", 00:15:27.480 "aliases": [ 00:15:27.480 "8e7a351c-541e-4bc9-b297-25c1470bdda8" 00:15:27.480 ], 00:15:27.480 "product_name": "Malloc disk", 00:15:27.480 "block_size": 512, 00:15:27.480 "num_blocks": 65536, 00:15:27.480 "uuid": "8e7a351c-541e-4bc9-b297-25c1470bdda8", 00:15:27.480 "assigned_rate_limits": { 00:15:27.480 "rw_ios_per_sec": 0, 00:15:27.480 "rw_mbytes_per_sec": 0, 00:15:27.480 "r_mbytes_per_sec": 0, 00:15:27.480 "w_mbytes_per_sec": 0 00:15:27.480 }, 00:15:27.480 "claimed": true, 00:15:27.480 "claim_type": "exclusive_write", 00:15:27.480 "zoned": false, 00:15:27.480 "supported_io_types": { 00:15:27.480 "read": true, 00:15:27.480 "write": true, 00:15:27.480 "unmap": true, 00:15:27.480 "flush": true, 00:15:27.480 "reset": true, 00:15:27.480 "nvme_admin": false, 00:15:27.480 "nvme_io": false, 00:15:27.480 "nvme_io_md": false, 00:15:27.480 "write_zeroes": true, 00:15:27.480 "zcopy": true, 00:15:27.480 "get_zone_info": false, 00:15:27.480 "zone_management": false, 00:15:27.480 "zone_append": false, 00:15:27.480 "compare": false, 00:15:27.480 "compare_and_write": false, 00:15:27.480 "abort": true, 00:15:27.480 "seek_hole": false, 00:15:27.480 "seek_data": false, 00:15:27.480 "copy": true, 00:15:27.480 "nvme_iov_md": false 00:15:27.480 }, 00:15:27.480 "memory_domains": [ 00:15:27.480 { 00:15:27.480 "dma_device_id": "system", 00:15:27.480 "dma_device_type": 1 00:15:27.480 }, 00:15:27.480 { 00:15:27.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.480 "dma_device_type": 2 00:15:27.480 } 00:15:27.480 ], 00:15:27.480 "driver_specific": {} 00:15:27.480 }' 00:15:27.480 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.480 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.480 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:27.480 10:22:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.480 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.480 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.480 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.738 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.738 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.738 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.738 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.738 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.738 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:27.738 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:27.738 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:27.997 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:27.997 "name": "BaseBdev4", 00:15:27.997 "aliases": [ 00:15:27.997 "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf" 00:15:27.997 ], 00:15:27.997 "product_name": "Malloc disk", 00:15:27.997 "block_size": 512, 00:15:27.997 "num_blocks": 65536, 00:15:27.997 "uuid": "7e9b2f59-5f63-4fc6-8b4c-34eff5dc2faf", 00:15:27.997 "assigned_rate_limits": { 00:15:27.997 "rw_ios_per_sec": 0, 00:15:27.997 "rw_mbytes_per_sec": 0, 00:15:27.997 "r_mbytes_per_sec": 0, 00:15:27.997 "w_mbytes_per_sec": 0 00:15:27.997 }, 00:15:27.997 "claimed": true, 00:15:27.997 "claim_type": "exclusive_write", 00:15:27.997 "zoned": false, 00:15:27.997 "supported_io_types": { 00:15:27.997 "read": true, 00:15:27.997 "write": true, 00:15:27.997 "unmap": true, 00:15:27.997 "flush": true, 00:15:27.997 "reset": true, 00:15:27.997 "nvme_admin": false, 00:15:27.997 "nvme_io": false, 00:15:27.997 "nvme_io_md": false, 00:15:27.997 "write_zeroes": true, 00:15:27.997 "zcopy": true, 00:15:27.997 "get_zone_info": false, 00:15:27.997 "zone_management": false, 00:15:27.997 "zone_append": false, 00:15:27.997 "compare": false, 00:15:27.997 "compare_and_write": false, 00:15:27.997 "abort": true, 00:15:27.997 "seek_hole": false, 00:15:27.997 "seek_data": false, 00:15:27.997 "copy": true, 00:15:27.997 "nvme_iov_md": false 00:15:27.997 }, 00:15:27.997 "memory_domains": [ 00:15:27.997 { 00:15:27.997 "dma_device_id": "system", 00:15:27.997 "dma_device_type": 1 00:15:27.997 }, 00:15:27.997 { 00:15:27.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.997 "dma_device_type": 2 00:15:27.997 } 00:15:27.997 ], 00:15:27.997 "driver_specific": {} 00:15:27.997 }' 00:15:27.997 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.997 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:27.997 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:27.997 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.997 10:22:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.997 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.997 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.997 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.257 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:28.257 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.257 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.257 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:28.257 10:22:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.517 [2024-07-15 10:22:53.047798] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.517 [2024-07-15 10:22:53.047819] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.517 [2024-07-15 10:22:53.047853] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.517 [2024-07-15 10:22:53.047895] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.517 [2024-07-15 10:22:53.047907] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2213c90 name Existed_Raid, state offline 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1808659 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1808659 ']' 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1808659 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1808659 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1808659' 00:15:28.517 killing process with pid 1808659 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1808659 00:15:28.517 [2024-07-15 10:22:53.118639] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.517 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1808659 00:15:28.517 [2024-07-15 10:22:53.149949] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.776 10:22:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:28.776 00:15:28.776 real 0m24.211s 00:15:28.776 user 0m44.355s 00:15:28.776 sys 0m4.537s 00:15:28.776 10:22:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.776 10:22:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.776 ************************************ 00:15:28.776 END TEST raid_state_function_test_sb 00:15:28.776 ************************************ 00:15:28.776 10:22:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:28.776 10:22:53 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:28.776 10:22:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:28.776 10:22:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.776 10:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.776 ************************************ 00:15:28.776 START TEST raid_superblock_test 00:15:28.776 ************************************ 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1813565 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1813565 /var/tmp/spdk-raid.sock 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1813565 ']' 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:28.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.776 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.776 [2024-07-15 10:22:53.448611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:28.776 [2024-07-15 10:22:53.448653] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813565 ] 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:28.776 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.776 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:28.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.777 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:28.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.777 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:28.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.777 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:28.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.777 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:28.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.777 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:28.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:28.777 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:28.777 [2024-07-15 10:22:53.535076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.035 [2024-07-15 10:22:53.610210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.035 [2024-07-15 10:22:53.667259] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.035 [2024-07-15 10:22:53.667286] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.603 10:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.604 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b malloc1 00:15:29.863 malloc1 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.863 [2024-07-15 10:22:54.583781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.863 [2024-07-15 10:22:54.583817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.863 [2024-07-15 10:22:54.583830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21412f0 00:15:29.863 [2024-07-15 10:22:54.583838] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.863 [2024-07-15 10:22:54.584999] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.863 [2024-07-15 10:22:54.585021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.863 pt1 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:29.863 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:30.123 malloc2 00:15:30.123 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:30.382 [2024-07-15 10:22:54.924430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:30.382 [2024-07-15 10:22:54.924463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.382 [2024-07-15 10:22:54.924475] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21426d0 00:15:30.382 [2024-07-15 10:22:54.924483] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.382 [2024-07-15 10:22:54.925513] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.382 [2024-07-15 10:22:54.925535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:30.382 pt2 00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 
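The trace above assembles the raid_superblock_test fixture one RPC at a time: each leg is a 32 MB malloc bdev with 512-byte blocks, wrapped in a passthru bdev (pt1..pt4) that carries a fixed UUID, and the four passthru bdevs are then combined into a raid0 volume with an on-disk superblock. A minimal standalone sketch of that sequence, assuming an SPDK app such as bdev_svc is already listening on /var/tmp/spdk-raid.sock and that rpc.py is invoked from the root of an SPDK checkout (the relative path is illustrative; the log itself uses the full workspace path):

#!/usr/bin/env bash
set -e
rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"   # same RPC socket the test uses

# one 32 MB / 512-byte-block malloc bdev plus a passthru wrapper per raid leg
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b malloc$i
    $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done

# assemble raid0 with a 64 KB strip size (-z 64) and superblock (-s), then inspect the result
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The final query is the same bdev_raid_get_bdevs/jq pattern the test script uses throughout to verify the reported state, raid_level, strip_size_kb and base_bdevs_list.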
00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.382 10:22:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:30.382 malloc3 00:15:30.382 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:30.642 [2024-07-15 10:22:55.260859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:30.642 [2024-07-15 10:22:55.260894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.642 [2024-07-15 10:22:55.260910] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22db6b0 00:15:30.642 [2024-07-15 10:22:55.260918] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.642 [2024-07-15 10:22:55.261971] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.642 [2024-07-15 10:22:55.261992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:30.642 pt3 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:30.642 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:30.901 malloc4 00:15:30.901 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:30.901 [2024-07-15 10:22:55.605743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:30.901 [2024-07-15 10:22:55.605778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.901 [2024-07-15 10:22:55.605790] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22d9370 00:15:30.901 [2024-07-15 10:22:55.605814] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:15:30.901 [2024-07-15 10:22:55.606845] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.901 [2024-07-15 10:22:55.606868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:30.901 pt4 00:15:30.901 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:30.901 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:30.901 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:31.185 [2024-07-15 10:22:55.774195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:31.185 [2024-07-15 10:22:55.775044] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.185 [2024-07-15 10:22:55.775082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:31.185 [2024-07-15 10:22:55.775110] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:31.185 [2024-07-15 10:22:55.775225] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x213a560 00:15:31.185 [2024-07-15 10:22:55.775233] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:31.185 [2024-07-15 10:22:55.775359] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22da680 00:15:31.185 [2024-07-15 10:22:55.775452] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x213a560 00:15:31.185 [2024-07-15 10:22:55.775462] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x213a560 00:15:31.185 [2024-07-15 10:22:55.775525] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.185 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.445 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.445 "name": "raid_bdev1", 00:15:31.445 "uuid": 
"2d0dec23-be36-47bb-9e8c-f3abbf94b5e2", 00:15:31.445 "strip_size_kb": 64, 00:15:31.445 "state": "online", 00:15:31.445 "raid_level": "raid0", 00:15:31.445 "superblock": true, 00:15:31.445 "num_base_bdevs": 4, 00:15:31.445 "num_base_bdevs_discovered": 4, 00:15:31.445 "num_base_bdevs_operational": 4, 00:15:31.445 "base_bdevs_list": [ 00:15:31.445 { 00:15:31.445 "name": "pt1", 00:15:31.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.445 "is_configured": true, 00:15:31.445 "data_offset": 2048, 00:15:31.445 "data_size": 63488 00:15:31.445 }, 00:15:31.445 { 00:15:31.445 "name": "pt2", 00:15:31.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.445 "is_configured": true, 00:15:31.445 "data_offset": 2048, 00:15:31.445 "data_size": 63488 00:15:31.445 }, 00:15:31.445 { 00:15:31.445 "name": "pt3", 00:15:31.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.445 "is_configured": true, 00:15:31.445 "data_offset": 2048, 00:15:31.445 "data_size": 63488 00:15:31.445 }, 00:15:31.445 { 00:15:31.445 "name": "pt4", 00:15:31.445 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:31.445 "is_configured": true, 00:15:31.445 "data_offset": 2048, 00:15:31.445 "data_size": 63488 00:15:31.445 } 00:15:31.445 ] 00:15:31.445 }' 00:15:31.445 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.445 10:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.704 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:31.704 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:31.704 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:31.704 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:31.704 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:31.704 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:31.704 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:31.704 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:31.963 [2024-07-15 10:22:56.572444] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.963 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:31.963 "name": "raid_bdev1", 00:15:31.963 "aliases": [ 00:15:31.963 "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2" 00:15:31.963 ], 00:15:31.963 "product_name": "Raid Volume", 00:15:31.963 "block_size": 512, 00:15:31.963 "num_blocks": 253952, 00:15:31.963 "uuid": "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2", 00:15:31.963 "assigned_rate_limits": { 00:15:31.963 "rw_ios_per_sec": 0, 00:15:31.963 "rw_mbytes_per_sec": 0, 00:15:31.963 "r_mbytes_per_sec": 0, 00:15:31.963 "w_mbytes_per_sec": 0 00:15:31.963 }, 00:15:31.963 "claimed": false, 00:15:31.963 "zoned": false, 00:15:31.963 "supported_io_types": { 00:15:31.963 "read": true, 00:15:31.963 "write": true, 00:15:31.963 "unmap": true, 00:15:31.963 "flush": true, 00:15:31.963 "reset": true, 00:15:31.963 "nvme_admin": false, 00:15:31.963 "nvme_io": false, 00:15:31.963 "nvme_io_md": false, 00:15:31.963 "write_zeroes": true, 00:15:31.963 "zcopy": false, 00:15:31.963 "get_zone_info": false, 00:15:31.963 
"zone_management": false, 00:15:31.963 "zone_append": false, 00:15:31.963 "compare": false, 00:15:31.963 "compare_and_write": false, 00:15:31.963 "abort": false, 00:15:31.963 "seek_hole": false, 00:15:31.963 "seek_data": false, 00:15:31.963 "copy": false, 00:15:31.963 "nvme_iov_md": false 00:15:31.963 }, 00:15:31.963 "memory_domains": [ 00:15:31.963 { 00:15:31.963 "dma_device_id": "system", 00:15:31.963 "dma_device_type": 1 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.963 "dma_device_type": 2 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "dma_device_id": "system", 00:15:31.963 "dma_device_type": 1 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.963 "dma_device_type": 2 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "dma_device_id": "system", 00:15:31.963 "dma_device_type": 1 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.963 "dma_device_type": 2 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "dma_device_id": "system", 00:15:31.963 "dma_device_type": 1 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.963 "dma_device_type": 2 00:15:31.963 } 00:15:31.963 ], 00:15:31.963 "driver_specific": { 00:15:31.963 "raid": { 00:15:31.963 "uuid": "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2", 00:15:31.963 "strip_size_kb": 64, 00:15:31.963 "state": "online", 00:15:31.963 "raid_level": "raid0", 00:15:31.963 "superblock": true, 00:15:31.963 "num_base_bdevs": 4, 00:15:31.963 "num_base_bdevs_discovered": 4, 00:15:31.963 "num_base_bdevs_operational": 4, 00:15:31.963 "base_bdevs_list": [ 00:15:31.963 { 00:15:31.963 "name": "pt1", 00:15:31.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.963 "is_configured": true, 00:15:31.963 "data_offset": 2048, 00:15:31.963 "data_size": 63488 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "name": "pt2", 00:15:31.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.963 "is_configured": true, 00:15:31.963 "data_offset": 2048, 00:15:31.963 "data_size": 63488 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "name": "pt3", 00:15:31.963 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:31.963 "is_configured": true, 00:15:31.963 "data_offset": 2048, 00:15:31.963 "data_size": 63488 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "name": "pt4", 00:15:31.963 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:31.963 "is_configured": true, 00:15:31.963 "data_offset": 2048, 00:15:31.963 "data_size": 63488 00:15:31.963 } 00:15:31.963 ] 00:15:31.963 } 00:15:31.963 } 00:15:31.963 }' 00:15:31.963 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.963 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:31.963 pt2 00:15:31.963 pt3 00:15:31.963 pt4' 00:15:31.963 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:31.963 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:31.963 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.222 "name": "pt1", 00:15:32.222 "aliases": [ 00:15:32.222 "00000000-0000-0000-0000-000000000001" 
00:15:32.222 ], 00:15:32.222 "product_name": "passthru", 00:15:32.222 "block_size": 512, 00:15:32.222 "num_blocks": 65536, 00:15:32.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.222 "assigned_rate_limits": { 00:15:32.222 "rw_ios_per_sec": 0, 00:15:32.222 "rw_mbytes_per_sec": 0, 00:15:32.222 "r_mbytes_per_sec": 0, 00:15:32.222 "w_mbytes_per_sec": 0 00:15:32.222 }, 00:15:32.222 "claimed": true, 00:15:32.222 "claim_type": "exclusive_write", 00:15:32.222 "zoned": false, 00:15:32.222 "supported_io_types": { 00:15:32.222 "read": true, 00:15:32.222 "write": true, 00:15:32.222 "unmap": true, 00:15:32.222 "flush": true, 00:15:32.222 "reset": true, 00:15:32.222 "nvme_admin": false, 00:15:32.222 "nvme_io": false, 00:15:32.222 "nvme_io_md": false, 00:15:32.222 "write_zeroes": true, 00:15:32.222 "zcopy": true, 00:15:32.222 "get_zone_info": false, 00:15:32.222 "zone_management": false, 00:15:32.222 "zone_append": false, 00:15:32.222 "compare": false, 00:15:32.222 "compare_and_write": false, 00:15:32.222 "abort": true, 00:15:32.222 "seek_hole": false, 00:15:32.222 "seek_data": false, 00:15:32.222 "copy": true, 00:15:32.222 "nvme_iov_md": false 00:15:32.222 }, 00:15:32.222 "memory_domains": [ 00:15:32.222 { 00:15:32.222 "dma_device_id": "system", 00:15:32.222 "dma_device_type": 1 00:15:32.222 }, 00:15:32.222 { 00:15:32.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.222 "dma_device_type": 2 00:15:32.222 } 00:15:32.222 ], 00:15:32.222 "driver_specific": { 00:15:32.222 "passthru": { 00:15:32.222 "name": "pt1", 00:15:32.222 "base_bdev_name": "malloc1" 00:15:32.222 } 00:15:32.222 } 00:15:32.222 }' 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.222 10:22:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.482 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.482 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.482 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.482 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.482 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.482 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:32.482 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.742 "name": "pt2", 00:15:32.742 "aliases": [ 00:15:32.742 "00000000-0000-0000-0000-000000000002" 00:15:32.742 ], 00:15:32.742 "product_name": "passthru", 00:15:32.742 "block_size": 512, 00:15:32.742 "num_blocks": 65536, 00:15:32.742 
"uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.742 "assigned_rate_limits": { 00:15:32.742 "rw_ios_per_sec": 0, 00:15:32.742 "rw_mbytes_per_sec": 0, 00:15:32.742 "r_mbytes_per_sec": 0, 00:15:32.742 "w_mbytes_per_sec": 0 00:15:32.742 }, 00:15:32.742 "claimed": true, 00:15:32.742 "claim_type": "exclusive_write", 00:15:32.742 "zoned": false, 00:15:32.742 "supported_io_types": { 00:15:32.742 "read": true, 00:15:32.742 "write": true, 00:15:32.742 "unmap": true, 00:15:32.742 "flush": true, 00:15:32.742 "reset": true, 00:15:32.742 "nvme_admin": false, 00:15:32.742 "nvme_io": false, 00:15:32.742 "nvme_io_md": false, 00:15:32.742 "write_zeroes": true, 00:15:32.742 "zcopy": true, 00:15:32.742 "get_zone_info": false, 00:15:32.742 "zone_management": false, 00:15:32.742 "zone_append": false, 00:15:32.742 "compare": false, 00:15:32.742 "compare_and_write": false, 00:15:32.742 "abort": true, 00:15:32.742 "seek_hole": false, 00:15:32.742 "seek_data": false, 00:15:32.742 "copy": true, 00:15:32.742 "nvme_iov_md": false 00:15:32.742 }, 00:15:32.742 "memory_domains": [ 00:15:32.742 { 00:15:32.742 "dma_device_id": "system", 00:15:32.742 "dma_device_type": 1 00:15:32.742 }, 00:15:32.742 { 00:15:32.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.742 "dma_device_type": 2 00:15:32.742 } 00:15:32.742 ], 00:15:32.742 "driver_specific": { 00:15:32.742 "passthru": { 00:15:32.742 "name": "pt2", 00:15:32.742 "base_bdev_name": "malloc2" 00:15:32.742 } 00:15:32.742 } 00:15:32.742 }' 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.742 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.002 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.002 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.002 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:33.002 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:33.002 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.002 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.002 "name": "pt3", 00:15:33.002 "aliases": [ 00:15:33.002 "00000000-0000-0000-0000-000000000003" 00:15:33.002 ], 00:15:33.002 "product_name": "passthru", 00:15:33.002 "block_size": 512, 00:15:33.002 "num_blocks": 65536, 00:15:33.002 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:33.002 "assigned_rate_limits": { 00:15:33.002 "rw_ios_per_sec": 0, 00:15:33.002 
"rw_mbytes_per_sec": 0, 00:15:33.002 "r_mbytes_per_sec": 0, 00:15:33.002 "w_mbytes_per_sec": 0 00:15:33.002 }, 00:15:33.002 "claimed": true, 00:15:33.002 "claim_type": "exclusive_write", 00:15:33.002 "zoned": false, 00:15:33.002 "supported_io_types": { 00:15:33.002 "read": true, 00:15:33.002 "write": true, 00:15:33.002 "unmap": true, 00:15:33.002 "flush": true, 00:15:33.002 "reset": true, 00:15:33.002 "nvme_admin": false, 00:15:33.002 "nvme_io": false, 00:15:33.002 "nvme_io_md": false, 00:15:33.002 "write_zeroes": true, 00:15:33.002 "zcopy": true, 00:15:33.002 "get_zone_info": false, 00:15:33.002 "zone_management": false, 00:15:33.002 "zone_append": false, 00:15:33.002 "compare": false, 00:15:33.002 "compare_and_write": false, 00:15:33.002 "abort": true, 00:15:33.002 "seek_hole": false, 00:15:33.002 "seek_data": false, 00:15:33.002 "copy": true, 00:15:33.002 "nvme_iov_md": false 00:15:33.002 }, 00:15:33.002 "memory_domains": [ 00:15:33.002 { 00:15:33.002 "dma_device_id": "system", 00:15:33.002 "dma_device_type": 1 00:15:33.002 }, 00:15:33.002 { 00:15:33.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.002 "dma_device_type": 2 00:15:33.002 } 00:15:33.002 ], 00:15:33.002 "driver_specific": { 00:15:33.002 "passthru": { 00:15:33.002 "name": "pt3", 00:15:33.002 "base_bdev_name": "malloc3" 00:15:33.002 } 00:15:33.002 } 00:15:33.002 }' 00:15:33.002 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.261 10:22:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.261 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.261 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:33.261 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.261 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:33.521 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.521 "name": "pt4", 00:15:33.521 "aliases": [ 00:15:33.521 "00000000-0000-0000-0000-000000000004" 00:15:33.521 ], 00:15:33.521 "product_name": "passthru", 00:15:33.521 "block_size": 512, 00:15:33.521 "num_blocks": 65536, 00:15:33.521 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:33.521 "assigned_rate_limits": { 00:15:33.521 "rw_ios_per_sec": 0, 00:15:33.521 "rw_mbytes_per_sec": 0, 00:15:33.521 "r_mbytes_per_sec": 0, 00:15:33.521 "w_mbytes_per_sec": 0 00:15:33.521 }, 00:15:33.521 "claimed": 
true, 00:15:33.521 "claim_type": "exclusive_write", 00:15:33.521 "zoned": false, 00:15:33.521 "supported_io_types": { 00:15:33.521 "read": true, 00:15:33.521 "write": true, 00:15:33.521 "unmap": true, 00:15:33.521 "flush": true, 00:15:33.521 "reset": true, 00:15:33.521 "nvme_admin": false, 00:15:33.521 "nvme_io": false, 00:15:33.521 "nvme_io_md": false, 00:15:33.521 "write_zeroes": true, 00:15:33.521 "zcopy": true, 00:15:33.521 "get_zone_info": false, 00:15:33.521 "zone_management": false, 00:15:33.521 "zone_append": false, 00:15:33.521 "compare": false, 00:15:33.521 "compare_and_write": false, 00:15:33.521 "abort": true, 00:15:33.521 "seek_hole": false, 00:15:33.521 "seek_data": false, 00:15:33.521 "copy": true, 00:15:33.521 "nvme_iov_md": false 00:15:33.521 }, 00:15:33.521 "memory_domains": [ 00:15:33.521 { 00:15:33.521 "dma_device_id": "system", 00:15:33.521 "dma_device_type": 1 00:15:33.521 }, 00:15:33.521 { 00:15:33.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.521 "dma_device_type": 2 00:15:33.521 } 00:15:33.521 ], 00:15:33.521 "driver_specific": { 00:15:33.521 "passthru": { 00:15:33.521 "name": "pt4", 00:15:33.521 "base_bdev_name": "malloc4" 00:15:33.521 } 00:15:33.521 } 00:15:33.521 }' 00:15:33.521 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.521 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.521 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.521 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:33.779 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:34.039 [2024-07-15 10:22:58.661824] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.039 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2d0dec23-be36-47bb-9e8c-f3abbf94b5e2 00:15:34.039 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2d0dec23-be36-47bb-9e8c-f3abbf94b5e2 ']' 00:15:34.039 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:34.299 [2024-07-15 10:22:58.830063] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.299 [2024-07-15 10:22:58.830078] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:15:34.299 [2024-07-15 10:22:58.830114] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.299 [2024-07-15 10:22:58.830157] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.299 [2024-07-15 10:22:58.830165] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213a560 name raid_bdev1, state offline 00:15:34.299 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.299 10:22:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:34.299 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:34.299 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:34.299 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.299 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:34.558 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.558 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:34.817 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.817 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:34.817 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.817 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.076 10:22:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.076 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:35.336 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:35.336 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:35.336 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:35.336 10:22:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:35.336 [2024-07-15 10:23:00.017095] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:35.336 [2024-07-15 10:23:00.018081] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:35.336 [2024-07-15 10:23:00.018114] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:35.336 [2024-07-15 10:23:00.018135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:35.336 [2024-07-15 10:23:00.018168] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:35.336 [2024-07-15 10:23:00.018196] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:35.336 [2024-07-15 10:23:00.018212] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:35.336 [2024-07-15 10:23:00.018226] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:35.336 [2024-07-15 10:23:00.018238] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.336 [2024-07-15 10:23:00.018244] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22e4d50 name raid_bdev1, state configuring 00:15:35.336 request: 00:15:35.336 { 00:15:35.336 "name": "raid_bdev1", 00:15:35.336 "raid_level": "raid0", 00:15:35.336 "base_bdevs": [ 00:15:35.336 "malloc1", 00:15:35.336 "malloc2", 00:15:35.336 "malloc3", 00:15:35.336 "malloc4" 00:15:35.336 ], 00:15:35.336 "strip_size_kb": 64, 00:15:35.336 "superblock": false, 00:15:35.336 "method": "bdev_raid_create", 00:15:35.336 "req_id": 1 00:15:35.336 } 00:15:35.336 Got JSON-RPC error response 00:15:35.336 response: 00:15:35.336 { 00:15:35.336 "code": -17, 00:15:35.336 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:35.336 } 00:15:35.336 10:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:35.336 10:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:35.336 10:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:35.336 10:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( 
!es == 0 )) 00:15:35.336 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.336 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:35.595 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:35.595 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:35.595 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.595 [2024-07-15 10:23:00.365962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.595 [2024-07-15 10:23:00.365990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.595 [2024-07-15 10:23:00.366002] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22e43f0 00:15:35.595 [2024-07-15 10:23:00.366010] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.595 [2024-07-15 10:23:00.367059] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.595 [2024-07-15 10:23:00.367096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.595 [2024-07-15 10:23:00.367136] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.595 [2024-07-15 10:23:00.367154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.595 pt1 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.854 "name": "raid_bdev1", 00:15:35.854 "uuid": "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2", 00:15:35.854 "strip_size_kb": 64, 00:15:35.854 "state": "configuring", 00:15:35.854 "raid_level": "raid0", 00:15:35.854 "superblock": true, 00:15:35.854 "num_base_bdevs": 4, 00:15:35.854 
"num_base_bdevs_discovered": 1, 00:15:35.854 "num_base_bdevs_operational": 4, 00:15:35.854 "base_bdevs_list": [ 00:15:35.854 { 00:15:35.854 "name": "pt1", 00:15:35.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.854 "is_configured": true, 00:15:35.854 "data_offset": 2048, 00:15:35.854 "data_size": 63488 00:15:35.854 }, 00:15:35.854 { 00:15:35.854 "name": null, 00:15:35.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.854 "is_configured": false, 00:15:35.854 "data_offset": 2048, 00:15:35.854 "data_size": 63488 00:15:35.854 }, 00:15:35.854 { 00:15:35.854 "name": null, 00:15:35.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:35.854 "is_configured": false, 00:15:35.854 "data_offset": 2048, 00:15:35.854 "data_size": 63488 00:15:35.854 }, 00:15:35.854 { 00:15:35.854 "name": null, 00:15:35.854 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:35.854 "is_configured": false, 00:15:35.854 "data_offset": 2048, 00:15:35.854 "data_size": 63488 00:15:35.854 } 00:15:35.854 ] 00:15:35.854 }' 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.854 10:23:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.423 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:36.423 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.423 [2024-07-15 10:23:01.204129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.423 [2024-07-15 10:23:01.204166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.423 [2024-07-15 10:23:01.204180] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213b6e0 00:15:36.423 [2024-07-15 10:23:01.204189] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.423 [2024-07-15 10:23:01.204423] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.423 [2024-07-15 10:23:01.204435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.423 [2024-07-15 10:23:01.204476] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:36.423 [2024-07-15 10:23:01.204489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.423 pt2 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:36.683 [2024-07-15 10:23:01.372573] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.683 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.945 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.945 "name": "raid_bdev1", 00:15:36.945 "uuid": "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2", 00:15:36.945 "strip_size_kb": 64, 00:15:36.945 "state": "configuring", 00:15:36.945 "raid_level": "raid0", 00:15:36.945 "superblock": true, 00:15:36.945 "num_base_bdevs": 4, 00:15:36.945 "num_base_bdevs_discovered": 1, 00:15:36.945 "num_base_bdevs_operational": 4, 00:15:36.945 "base_bdevs_list": [ 00:15:36.945 { 00:15:36.945 "name": "pt1", 00:15:36.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.945 "is_configured": true, 00:15:36.945 "data_offset": 2048, 00:15:36.945 "data_size": 63488 00:15:36.945 }, 00:15:36.945 { 00:15:36.945 "name": null, 00:15:36.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.945 "is_configured": false, 00:15:36.945 "data_offset": 2048, 00:15:36.945 "data_size": 63488 00:15:36.945 }, 00:15:36.945 { 00:15:36.945 "name": null, 00:15:36.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.945 "is_configured": false, 00:15:36.945 "data_offset": 2048, 00:15:36.945 "data_size": 63488 00:15:36.945 }, 00:15:36.945 { 00:15:36.945 "name": null, 00:15:36.945 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:36.945 "is_configured": false, 00:15:36.945 "data_offset": 2048, 00:15:36.945 "data_size": 63488 00:15:36.945 } 00:15:36.945 ] 00:15:36.945 }' 00:15:36.945 10:23:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.945 10:23:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.510 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:37.510 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:37.510 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.510 [2024-07-15 10:23:02.182647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.510 [2024-07-15 10:23:02.182683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.510 [2024-07-15 10:23:02.182696] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213b910 00:15:37.510 [2024-07-15 10:23:02.182704] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.510 [2024-07-15 10:23:02.182945] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.511 [2024-07-15 10:23:02.182962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.511 [2024-07-15 
10:23:02.183015] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:37.511 [2024-07-15 10:23:02.183031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.511 pt2 00:15:37.511 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:37.511 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:37.511 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:37.768 [2024-07-15 10:23:02.363132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:37.768 [2024-07-15 10:23:02.363160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.768 [2024-07-15 10:23:02.363171] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22d9ac0 00:15:37.768 [2024-07-15 10:23:02.363178] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.768 [2024-07-15 10:23:02.363373] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.768 [2024-07-15 10:23:02.363384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:37.768 [2024-07-15 10:23:02.363419] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:37.768 [2024-07-15 10:23:02.363431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:37.768 pt3 00:15:37.768 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:37.768 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:37.768 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:37.768 [2024-07-15 10:23:02.535577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:37.768 [2024-07-15 10:23:02.535603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.768 [2024-07-15 10:23:02.535614] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2137f60 00:15:37.768 [2024-07-15 10:23:02.535622] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.768 [2024-07-15 10:23:02.535816] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.768 [2024-07-15 10:23:02.535828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:37.768 [2024-07-15 10:23:02.535862] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:37.768 [2024-07-15 10:23:02.535874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:37.768 [2024-07-15 10:23:02.535974] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x213bbc0 00:15:37.768 [2024-07-15 10:23:02.535982] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:37.768 [2024-07-15 10:23:02.536096] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2140f70 00:15:37.768 [2024-07-15 10:23:02.536181] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x213bbc0 00:15:37.768 [2024-07-15 10:23:02.536188] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x213bbc0 00:15:37.768 [2024-07-15 10:23:02.536251] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.768 pt4 00:15:37.768 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:37.768 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:37.768 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:37.768 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.026 "name": "raid_bdev1", 00:15:38.026 "uuid": "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2", 00:15:38.026 "strip_size_kb": 64, 00:15:38.026 "state": "online", 00:15:38.026 "raid_level": "raid0", 00:15:38.026 "superblock": true, 00:15:38.026 "num_base_bdevs": 4, 00:15:38.026 "num_base_bdevs_discovered": 4, 00:15:38.026 "num_base_bdevs_operational": 4, 00:15:38.026 "base_bdevs_list": [ 00:15:38.026 { 00:15:38.026 "name": "pt1", 00:15:38.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.026 "is_configured": true, 00:15:38.026 "data_offset": 2048, 00:15:38.026 "data_size": 63488 00:15:38.026 }, 00:15:38.026 { 00:15:38.026 "name": "pt2", 00:15:38.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.026 "is_configured": true, 00:15:38.026 "data_offset": 2048, 00:15:38.026 "data_size": 63488 00:15:38.026 }, 00:15:38.026 { 00:15:38.026 "name": "pt3", 00:15:38.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.026 "is_configured": true, 00:15:38.026 "data_offset": 2048, 00:15:38.026 "data_size": 63488 00:15:38.026 }, 00:15:38.026 { 00:15:38.026 "name": "pt4", 00:15:38.026 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.026 "is_configured": true, 00:15:38.026 "data_offset": 2048, 00:15:38.026 "data_size": 63488 00:15:38.026 } 00:15:38.026 ] 00:15:38.026 }' 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.026 10:23:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.598 
10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:38.598 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:38.598 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:38.598 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:38.598 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:38.598 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:38.598 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:38.599 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:38.599 [2024-07-15 10:23:03.349850] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.599 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:38.599 "name": "raid_bdev1", 00:15:38.599 "aliases": [ 00:15:38.599 "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2" 00:15:38.599 ], 00:15:38.599 "product_name": "Raid Volume", 00:15:38.599 "block_size": 512, 00:15:38.599 "num_blocks": 253952, 00:15:38.599 "uuid": "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2", 00:15:38.599 "assigned_rate_limits": { 00:15:38.599 "rw_ios_per_sec": 0, 00:15:38.599 "rw_mbytes_per_sec": 0, 00:15:38.599 "r_mbytes_per_sec": 0, 00:15:38.599 "w_mbytes_per_sec": 0 00:15:38.599 }, 00:15:38.599 "claimed": false, 00:15:38.599 "zoned": false, 00:15:38.599 "supported_io_types": { 00:15:38.599 "read": true, 00:15:38.599 "write": true, 00:15:38.599 "unmap": true, 00:15:38.599 "flush": true, 00:15:38.599 "reset": true, 00:15:38.599 "nvme_admin": false, 00:15:38.599 "nvme_io": false, 00:15:38.599 "nvme_io_md": false, 00:15:38.599 "write_zeroes": true, 00:15:38.599 "zcopy": false, 00:15:38.599 "get_zone_info": false, 00:15:38.599 "zone_management": false, 00:15:38.599 "zone_append": false, 00:15:38.599 "compare": false, 00:15:38.599 "compare_and_write": false, 00:15:38.599 "abort": false, 00:15:38.599 "seek_hole": false, 00:15:38.599 "seek_data": false, 00:15:38.599 "copy": false, 00:15:38.599 "nvme_iov_md": false 00:15:38.599 }, 00:15:38.599 "memory_domains": [ 00:15:38.599 { 00:15:38.599 "dma_device_id": "system", 00:15:38.599 "dma_device_type": 1 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.599 "dma_device_type": 2 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "dma_device_id": "system", 00:15:38.599 "dma_device_type": 1 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.599 "dma_device_type": 2 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "dma_device_id": "system", 00:15:38.599 "dma_device_type": 1 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.599 "dma_device_type": 2 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "dma_device_id": "system", 00:15:38.599 "dma_device_type": 1 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.599 "dma_device_type": 2 00:15:38.599 } 00:15:38.599 ], 00:15:38.599 "driver_specific": { 00:15:38.599 "raid": { 00:15:38.599 "uuid": "2d0dec23-be36-47bb-9e8c-f3abbf94b5e2", 00:15:38.599 "strip_size_kb": 64, 00:15:38.599 "state": "online", 00:15:38.599 
"raid_level": "raid0", 00:15:38.599 "superblock": true, 00:15:38.599 "num_base_bdevs": 4, 00:15:38.599 "num_base_bdevs_discovered": 4, 00:15:38.599 "num_base_bdevs_operational": 4, 00:15:38.599 "base_bdevs_list": [ 00:15:38.599 { 00:15:38.599 "name": "pt1", 00:15:38.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.599 "is_configured": true, 00:15:38.599 "data_offset": 2048, 00:15:38.599 "data_size": 63488 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "name": "pt2", 00:15:38.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.599 "is_configured": true, 00:15:38.599 "data_offset": 2048, 00:15:38.599 "data_size": 63488 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "name": "pt3", 00:15:38.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.599 "is_configured": true, 00:15:38.599 "data_offset": 2048, 00:15:38.599 "data_size": 63488 00:15:38.599 }, 00:15:38.599 { 00:15:38.599 "name": "pt4", 00:15:38.599 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.599 "is_configured": true, 00:15:38.599 "data_offset": 2048, 00:15:38.599 "data_size": 63488 00:15:38.599 } 00:15:38.599 ] 00:15:38.599 } 00:15:38.599 } 00:15:38.599 }' 00:15:38.599 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:38.858 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:38.858 pt2 00:15:38.858 pt3 00:15:38.858 pt4' 00:15:38.858 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.858 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:38.858 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.858 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.858 "name": "pt1", 00:15:38.858 "aliases": [ 00:15:38.858 "00000000-0000-0000-0000-000000000001" 00:15:38.858 ], 00:15:38.858 "product_name": "passthru", 00:15:38.858 "block_size": 512, 00:15:38.858 "num_blocks": 65536, 00:15:38.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.858 "assigned_rate_limits": { 00:15:38.858 "rw_ios_per_sec": 0, 00:15:38.858 "rw_mbytes_per_sec": 0, 00:15:38.858 "r_mbytes_per_sec": 0, 00:15:38.858 "w_mbytes_per_sec": 0 00:15:38.858 }, 00:15:38.858 "claimed": true, 00:15:38.858 "claim_type": "exclusive_write", 00:15:38.858 "zoned": false, 00:15:38.858 "supported_io_types": { 00:15:38.858 "read": true, 00:15:38.858 "write": true, 00:15:38.858 "unmap": true, 00:15:38.858 "flush": true, 00:15:38.858 "reset": true, 00:15:38.858 "nvme_admin": false, 00:15:38.858 "nvme_io": false, 00:15:38.858 "nvme_io_md": false, 00:15:38.858 "write_zeroes": true, 00:15:38.858 "zcopy": true, 00:15:38.858 "get_zone_info": false, 00:15:38.858 "zone_management": false, 00:15:38.858 "zone_append": false, 00:15:38.858 "compare": false, 00:15:38.858 "compare_and_write": false, 00:15:38.858 "abort": true, 00:15:38.858 "seek_hole": false, 00:15:38.858 "seek_data": false, 00:15:38.858 "copy": true, 00:15:38.858 "nvme_iov_md": false 00:15:38.858 }, 00:15:38.858 "memory_domains": [ 00:15:38.858 { 00:15:38.858 "dma_device_id": "system", 00:15:38.858 "dma_device_type": 1 00:15:38.858 }, 00:15:38.858 { 00:15:38.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.858 "dma_device_type": 2 00:15:38.858 } 00:15:38.858 ], 
00:15:38.858 "driver_specific": { 00:15:38.858 "passthru": { 00:15:38.858 "name": "pt1", 00:15:38.858 "base_bdev_name": "malloc1" 00:15:38.858 } 00:15:38.858 } 00:15:38.858 }' 00:15:38.858 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.858 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:39.115 10:23:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:39.373 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:39.373 "name": "pt2", 00:15:39.373 "aliases": [ 00:15:39.373 "00000000-0000-0000-0000-000000000002" 00:15:39.373 ], 00:15:39.373 "product_name": "passthru", 00:15:39.373 "block_size": 512, 00:15:39.373 "num_blocks": 65536, 00:15:39.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.373 "assigned_rate_limits": { 00:15:39.373 "rw_ios_per_sec": 0, 00:15:39.373 "rw_mbytes_per_sec": 0, 00:15:39.373 "r_mbytes_per_sec": 0, 00:15:39.373 "w_mbytes_per_sec": 0 00:15:39.373 }, 00:15:39.373 "claimed": true, 00:15:39.373 "claim_type": "exclusive_write", 00:15:39.373 "zoned": false, 00:15:39.373 "supported_io_types": { 00:15:39.373 "read": true, 00:15:39.373 "write": true, 00:15:39.373 "unmap": true, 00:15:39.373 "flush": true, 00:15:39.373 "reset": true, 00:15:39.373 "nvme_admin": false, 00:15:39.373 "nvme_io": false, 00:15:39.373 "nvme_io_md": false, 00:15:39.373 "write_zeroes": true, 00:15:39.373 "zcopy": true, 00:15:39.373 "get_zone_info": false, 00:15:39.373 "zone_management": false, 00:15:39.373 "zone_append": false, 00:15:39.373 "compare": false, 00:15:39.373 "compare_and_write": false, 00:15:39.373 "abort": true, 00:15:39.373 "seek_hole": false, 00:15:39.373 "seek_data": false, 00:15:39.373 "copy": true, 00:15:39.373 "nvme_iov_md": false 00:15:39.373 }, 00:15:39.373 "memory_domains": [ 00:15:39.373 { 00:15:39.373 "dma_device_id": "system", 00:15:39.373 "dma_device_type": 1 00:15:39.373 }, 00:15:39.373 { 00:15:39.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.373 "dma_device_type": 2 00:15:39.373 } 00:15:39.373 ], 00:15:39.373 "driver_specific": { 00:15:39.373 "passthru": { 00:15:39.373 "name": "pt2", 00:15:39.373 "base_bdev_name": "malloc2" 
00:15:39.373 } 00:15:39.373 } 00:15:39.373 }' 00:15:39.373 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.373 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.373 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:39.373 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.373 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:39.631 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:39.900 "name": "pt3", 00:15:39.900 "aliases": [ 00:15:39.900 "00000000-0000-0000-0000-000000000003" 00:15:39.900 ], 00:15:39.900 "product_name": "passthru", 00:15:39.900 "block_size": 512, 00:15:39.900 "num_blocks": 65536, 00:15:39.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.900 "assigned_rate_limits": { 00:15:39.900 "rw_ios_per_sec": 0, 00:15:39.900 "rw_mbytes_per_sec": 0, 00:15:39.900 "r_mbytes_per_sec": 0, 00:15:39.900 "w_mbytes_per_sec": 0 00:15:39.900 }, 00:15:39.900 "claimed": true, 00:15:39.900 "claim_type": "exclusive_write", 00:15:39.900 "zoned": false, 00:15:39.900 "supported_io_types": { 00:15:39.900 "read": true, 00:15:39.900 "write": true, 00:15:39.900 "unmap": true, 00:15:39.900 "flush": true, 00:15:39.900 "reset": true, 00:15:39.900 "nvme_admin": false, 00:15:39.900 "nvme_io": false, 00:15:39.900 "nvme_io_md": false, 00:15:39.900 "write_zeroes": true, 00:15:39.900 "zcopy": true, 00:15:39.900 "get_zone_info": false, 00:15:39.900 "zone_management": false, 00:15:39.900 "zone_append": false, 00:15:39.900 "compare": false, 00:15:39.900 "compare_and_write": false, 00:15:39.900 "abort": true, 00:15:39.900 "seek_hole": false, 00:15:39.900 "seek_data": false, 00:15:39.900 "copy": true, 00:15:39.900 "nvme_iov_md": false 00:15:39.900 }, 00:15:39.900 "memory_domains": [ 00:15:39.900 { 00:15:39.900 "dma_device_id": "system", 00:15:39.900 "dma_device_type": 1 00:15:39.900 }, 00:15:39.900 { 00:15:39.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.900 "dma_device_type": 2 00:15:39.900 } 00:15:39.900 ], 00:15:39.900 "driver_specific": { 00:15:39.900 "passthru": { 00:15:39.900 "name": "pt3", 00:15:39.900 "base_bdev_name": "malloc3" 00:15:39.900 } 00:15:39.900 } 00:15:39.900 }' 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:39.900 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.159 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.159 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.159 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.159 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.159 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:40.159 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.159 "name": "pt4", 00:15:40.159 "aliases": [ 00:15:40.159 "00000000-0000-0000-0000-000000000004" 00:15:40.159 ], 00:15:40.159 "product_name": "passthru", 00:15:40.159 "block_size": 512, 00:15:40.159 "num_blocks": 65536, 00:15:40.159 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.159 "assigned_rate_limits": { 00:15:40.159 "rw_ios_per_sec": 0, 00:15:40.159 "rw_mbytes_per_sec": 0, 00:15:40.159 "r_mbytes_per_sec": 0, 00:15:40.159 "w_mbytes_per_sec": 0 00:15:40.159 }, 00:15:40.159 "claimed": true, 00:15:40.159 "claim_type": "exclusive_write", 00:15:40.159 "zoned": false, 00:15:40.159 "supported_io_types": { 00:15:40.159 "read": true, 00:15:40.159 "write": true, 00:15:40.159 "unmap": true, 00:15:40.159 "flush": true, 00:15:40.159 "reset": true, 00:15:40.159 "nvme_admin": false, 00:15:40.159 "nvme_io": false, 00:15:40.159 "nvme_io_md": false, 00:15:40.159 "write_zeroes": true, 00:15:40.159 "zcopy": true, 00:15:40.159 "get_zone_info": false, 00:15:40.159 "zone_management": false, 00:15:40.159 "zone_append": false, 00:15:40.159 "compare": false, 00:15:40.159 "compare_and_write": false, 00:15:40.159 "abort": true, 00:15:40.159 "seek_hole": false, 00:15:40.159 "seek_data": false, 00:15:40.159 "copy": true, 00:15:40.159 "nvme_iov_md": false 00:15:40.159 }, 00:15:40.159 "memory_domains": [ 00:15:40.159 { 00:15:40.159 "dma_device_id": "system", 00:15:40.159 "dma_device_type": 1 00:15:40.159 }, 00:15:40.159 { 00:15:40.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.159 "dma_device_type": 2 00:15:40.159 } 00:15:40.159 ], 00:15:40.159 "driver_specific": { 00:15:40.159 "passthru": { 00:15:40.159 "name": "pt4", 00:15:40.159 "base_bdev_name": "malloc4" 00:15:40.159 } 00:15:40.159 } 00:15:40.159 }' 00:15:40.159 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.416 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.416 10:23:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.416 10:23:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.416 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.416 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.416 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.416 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.416 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.416 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.416 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:40.674 [2024-07-15 10:23:05.371091] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2d0dec23-be36-47bb-9e8c-f3abbf94b5e2 '!=' 2d0dec23-be36-47bb-9e8c-f3abbf94b5e2 ']' 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1813565 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1813565 ']' 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1813565 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1813565 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1813565' 00:15:40.674 killing process with pid 1813565 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1813565 00:15:40.674 [2024-07-15 10:23:05.446248] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.674 [2024-07-15 10:23:05.446294] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.674 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1813565 00:15:40.674 [2024-07-15 10:23:05.446338] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.674 [2024-07-15 10:23:05.446350] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x213bbc0 name raid_bdev1, state offline 00:15:40.931 [2024-07-15 10:23:05.478044] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.931 10:23:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:40.931 00:15:40.931 real 0m12.241s 00:15:40.931 user 0m21.852s 00:15:40.931 sys 0m2.385s 00:15:40.931 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.931 10:23:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.931 ************************************ 00:15:40.931 END TEST raid_superblock_test 00:15:40.931 ************************************ 00:15:40.931 10:23:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:40.931 10:23:05 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:40.931 10:23:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:40.931 10:23:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.931 10:23:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.189 ************************************ 00:15:41.189 START TEST raid_read_error_test 00:15:41.189 ************************************ 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 
00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.vtfZSFOrk5 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1815939 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1815939 /var/tmp/spdk-raid.sock 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1815939 ']' 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:41.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.189 10:23:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:41.189 [2024-07-15 10:23:05.788087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
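For readers skimming the flags: the bdevperf instance that just announced "Starting SPDK" above is the I/O generator for this error test. Below is a re-typed sketch of the same invocation with the options spelled out; the flag descriptions follow SPDK bdevperf's usage text and should be re-checked against the revision checked out in this job, and the SPDK variable is editorial shorthand for the workspace path used throughout this log.

  # Sketch: the bdevperf command driving raid_read_error_test, annotated.
  SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk   # workspace path as used in this job
  "$SPDK/build/examples/bdevperf" \
      -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 128k -q 1 \
      -z -f \
      -L bdev_raid
  # -r  RPC socket shared with the test's rpc.py calls
  # -T  run I/O only against the bdev named raid_bdev1
  # -t  run time in seconds
  # -w  workload pattern (mixed random read/write)
  # -M  read percentage of the mix (50% reads, 50% writes)
  # -o  I/O size per request
  # -q  queue depth
  # -z  start idle and wait for the 'perform_tests' RPC (sent later via bdevperf.py)
  # -f  keep running when I/Os fail, so injected errors are counted instead of fatal
  # -L  enable the 'bdev_raid' debug log flag (source of the *DEBUG* lines below)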
00:15:41.189 [2024-07-15 10:23:05.788130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815939 ] 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:41.189 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:41.189 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:41.189 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:41.189 [2024-07-15 10:23:05.879203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.189 [2024-07-15 10:23:05.952300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.447 [2024-07-15 10:23:06.006011] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.447 [2024-07-15 10:23:06.006038] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.011 10:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.011 10:23:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:42.011 10:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:42.011 10:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:42.011 BaseBdev1_malloc 00:15:42.011 10:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:42.268 true 00:15:42.268 10:23:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:42.268 [2024-07-15 10:23:07.025821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:42.268 [2024-07-15 10:23:07.025857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.268 [2024-07-15 10:23:07.025871] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1090190 00:15:42.268 [2024-07-15 10:23:07.025879] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.268 [2024-07-15 10:23:07.027093] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.268 [2024-07-15 10:23:07.027115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:42.268 BaseBdev1 00:15:42.268 10:23:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:42.268 10:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:42.525 BaseBdev2_malloc 00:15:42.525 10:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:42.783 true 00:15:42.783 10:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:42.783 [2024-07-15 10:23:07.510700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:42.783 [2024-07-15 10:23:07.510730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.783 [2024-07-15 10:23:07.510745] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1094e20 00:15:42.783 [2024-07-15 10:23:07.510757] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.783 [2024-07-15 10:23:07.511740] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.783 [2024-07-15 10:23:07.511761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:42.783 BaseBdev2 00:15:42.783 10:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:42.783 10:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:43.041 BaseBdev3_malloc 00:15:43.041 10:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:43.299 true 00:15:43.299 10:23:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:43.299 [2024-07-15 10:23:08.003642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:43.299 [2024-07-15 10:23:08.003673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.299 [2024-07-15 10:23:08.003688] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1095d90 00:15:43.299 [2024-07-15 10:23:08.003697] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.299 [2024-07-15 10:23:08.004685] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.299 [2024-07-15 10:23:08.004706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:43.299 BaseBdev3 00:15:43.299 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:43.299 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:43.557 BaseBdev4_malloc 00:15:43.557 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:43.557 true 00:15:43.557 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:43.815 [2024-07-15 10:23:08.472281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:43.815 [2024-07-15 10:23:08.472312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.815 [2024-07-15 10:23:08.472326] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1098000 00:15:43.815 [2024-07-15 10:23:08.472334] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.815 [2024-07-15 10:23:08.473302] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.815 [2024-07-15 10:23:08.473325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:43.815 BaseBdev4 00:15:43.815 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:44.072 [2024-07-15 10:23:08.640750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.072 [2024-07-15 10:23:08.641599] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.072 [2024-07-15 10:23:08.641643] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.072 [2024-07-15 10:23:08.641678] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:44.072 [2024-07-15 10:23:08.641827] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1098dd0 00:15:44.072 [2024-07-15 10:23:08.641834] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:44.072 [2024-07-15 10:23:08.641973] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x109a080 00:15:44.072 [2024-07-15 10:23:08.642072] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1098dd0 00:15:44.072 [2024-07-15 10:23:08.642078] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1098dd0 00:15:44.072 [2024-07-15 10:23:08.642142] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.072 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:44.072 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:44.072 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:44.072 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:44.072 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.072 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:44.072 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.072 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:15:44.073 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.073 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.073 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.073 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.073 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.073 "name": "raid_bdev1", 00:15:44.073 "uuid": "0dbdbb26-c76b-440d-872f-1b15facdf647", 00:15:44.073 "strip_size_kb": 64, 00:15:44.073 "state": "online", 00:15:44.073 "raid_level": "raid0", 00:15:44.073 "superblock": true, 00:15:44.073 "num_base_bdevs": 4, 00:15:44.073 "num_base_bdevs_discovered": 4, 00:15:44.073 "num_base_bdevs_operational": 4, 00:15:44.073 "base_bdevs_list": [ 00:15:44.073 { 00:15:44.073 "name": "BaseBdev1", 00:15:44.073 "uuid": "f02ebc5d-3be2-5490-9147-5427c5c2c614", 00:15:44.073 "is_configured": true, 00:15:44.073 "data_offset": 2048, 00:15:44.073 "data_size": 63488 00:15:44.073 }, 00:15:44.073 { 00:15:44.073 "name": "BaseBdev2", 00:15:44.073 "uuid": "23d70bbd-6858-5cc5-b9e5-70e9dc3bc344", 00:15:44.073 "is_configured": true, 00:15:44.073 "data_offset": 2048, 00:15:44.073 "data_size": 63488 00:15:44.073 }, 00:15:44.073 { 00:15:44.073 "name": "BaseBdev3", 00:15:44.073 "uuid": "5a92cf2f-ef2e-53ea-86d7-05d85a6dcb8e", 00:15:44.073 "is_configured": true, 00:15:44.073 "data_offset": 2048, 00:15:44.073 "data_size": 63488 00:15:44.073 }, 00:15:44.073 { 00:15:44.073 "name": "BaseBdev4", 00:15:44.073 "uuid": "d6353e9d-dd8f-5e34-8096-e6e6086c577f", 00:15:44.073 "is_configured": true, 00:15:44.073 "data_offset": 2048, 00:15:44.073 "data_size": 63488 00:15:44.073 } 00:15:44.073 ] 00:15:44.073 }' 00:15:44.073 10:23:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.073 10:23:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.637 10:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:44.637 10:23:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:44.637 [2024-07-15 10:23:09.382842] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xeebef0 00:15:45.632 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.891 "name": "raid_bdev1", 00:15:45.891 "uuid": "0dbdbb26-c76b-440d-872f-1b15facdf647", 00:15:45.891 "strip_size_kb": 64, 00:15:45.891 "state": "online", 00:15:45.891 "raid_level": "raid0", 00:15:45.891 "superblock": true, 00:15:45.891 "num_base_bdevs": 4, 00:15:45.891 "num_base_bdevs_discovered": 4, 00:15:45.891 "num_base_bdevs_operational": 4, 00:15:45.891 "base_bdevs_list": [ 00:15:45.891 { 00:15:45.891 "name": "BaseBdev1", 00:15:45.891 "uuid": "f02ebc5d-3be2-5490-9147-5427c5c2c614", 00:15:45.891 "is_configured": true, 00:15:45.891 "data_offset": 2048, 00:15:45.891 "data_size": 63488 00:15:45.891 }, 00:15:45.891 { 00:15:45.891 "name": "BaseBdev2", 00:15:45.891 "uuid": "23d70bbd-6858-5cc5-b9e5-70e9dc3bc344", 00:15:45.891 "is_configured": true, 00:15:45.891 "data_offset": 2048, 00:15:45.891 "data_size": 63488 00:15:45.891 }, 00:15:45.891 { 00:15:45.891 "name": "BaseBdev3", 00:15:45.891 "uuid": "5a92cf2f-ef2e-53ea-86d7-05d85a6dcb8e", 00:15:45.891 "is_configured": true, 00:15:45.891 "data_offset": 2048, 00:15:45.891 "data_size": 63488 00:15:45.891 }, 00:15:45.891 { 00:15:45.891 "name": "BaseBdev4", 00:15:45.891 "uuid": "d6353e9d-dd8f-5e34-8096-e6e6086c577f", 00:15:45.891 "is_configured": true, 00:15:45.891 "data_offset": 2048, 00:15:45.891 "data_size": 63488 00:15:45.891 } 00:15:45.891 ] 00:15:45.891 }' 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.891 10:23:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.457 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:46.716 [2024-07-15 10:23:11.303089] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.716 [2024-07-15 10:23:11.303131] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.716 [2024-07-15 10:23:11.305073] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.716 [2024-07-15 10:23:11.305098] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.716 [2024-07-15 10:23:11.305122] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.716 [2024-07-15 10:23:11.305129] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x1098dd0 name raid_bdev1, state offline 00:15:46.716 0 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1815939 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1815939 ']' 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1815939 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1815939 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1815939' 00:15:46.716 killing process with pid 1815939 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1815939 00:15:46.716 [2024-07-15 10:23:11.372940] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.716 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1815939 00:15:46.716 [2024-07-15 10:23:11.398378] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.975 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:46.975 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.vtfZSFOrk5 00:15:46.975 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:46.975 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:15:46.975 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:46.975 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:46.975 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:46.976 10:23:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:15:46.976 00:15:46.976 real 0m5.863s 00:15:46.976 user 0m9.047s 00:15:46.976 sys 0m0.986s 00:15:46.976 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.976 10:23:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.976 ************************************ 00:15:46.976 END TEST raid_read_error_test 00:15:46.976 ************************************ 00:15:46.976 10:23:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:46.976 10:23:11 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:15:46.976 10:23:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:46.976 10:23:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.976 10:23:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:46.976 ************************************ 00:15:46.976 START TEST raid_write_error_test 00:15:46.976 ************************************ 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 
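The write-error variant invoked here drives the same construction as the read test that just finished; only the injected I/O type changes from read to write. The following is a condensed sketch of that sequence, assembled from the rpc.py calls recorded above; the rpc helper, SPDK path variable, and bdevperf_log variable are editorial shorthands (bdevperf_log corresponds to the mktemp file such as /raidtest/tmp.vtfZSFOrk5 that captures bdevperf's output), not names from the scripts themselves.

  # Sketch of the raid_io_error_test flow, condensed from the RPC calls in this log.
  # Assumes bdevperf is already listening on the socket with -z, as shown above.
  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

  for i in 1 2 3 4; do
      rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"            # 32 MiB malloc bdev, 512-byte blocks
      rpc bdev_error_create "BaseBdev${i}_malloc"                       # error-injection wrapper named EE_BaseBdev${i}_malloc
      rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done

  # raid0 over the four passthru bdevs, 64 KiB strip, with an on-disk superblock (-s)
  rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

  # Inject failures on the first base bdev ('read' in the test above, 'write' in the
  # one starting here), let bdevperf run its timed workload, then tear the array down.
  rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/spdk-raid.sock perform_tests
  rpc bdev_raid_delete raid_bdev1

  # Pass/fail check: raid0 has no redundancy, so injected base-bdev errors must show
  # up as a non-zero fail rate in the bdevperf summary (field 6 of the raid_bdev1 line,
  # which is where the 0.52 above comes from).
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]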
00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.wc0gCxOdSn 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1816943 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1816943 /var/tmp/spdk-raid.sock 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1816943 ']' 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:46.976 10:23:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:46.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.976 10:23:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:46.976 [2024-07-15 10:23:11.723272] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:46.976 [2024-07-15 10:23:11.723315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816943 ] 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:47.236 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:47.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:47.236 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:47.236 [2024-07-15 10:23:11.813334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.236 [2024-07-15 10:23:11.886650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.236 [2024-07-15 10:23:11.939028] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.236 [2024-07-15 10:23:11.939051] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.804 10:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.804 10:23:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:47.804 10:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:47.804 10:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:48.064 BaseBdev1_malloc 00:15:48.064 10:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:48.064 true 00:15:48.064 10:23:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:48.323 [2024-07-15 10:23:12.983079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:48.323 [2024-07-15 10:23:12.983112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.323 [2024-07-15 10:23:12.983126] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x133f190 00:15:48.323 [2024-07-15 10:23:12.983133] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.323 [2024-07-15 10:23:12.984272] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.323 [2024-07-15 10:23:12.984293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.323 BaseBdev1 00:15:48.324 10:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:48.324 10:23:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:48.593 BaseBdev2_malloc 00:15:48.593 10:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:48.593 true 00:15:48.593 10:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:48.857 [2024-07-15 10:23:13.459776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:48.857 [2024-07-15 10:23:13.459806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.857 [2024-07-15 10:23:13.459820] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1343e20 00:15:48.857 [2024-07-15 10:23:13.459828] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.857 [2024-07-15 10:23:13.460814] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.857 [2024-07-15 10:23:13.460835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.857 BaseBdev2 00:15:48.857 10:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:48.857 10:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:48.857 BaseBdev3_malloc 00:15:48.857 10:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:49.115 true 00:15:49.115 10:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:49.375 [2024-07-15 10:23:13.932406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:49.375 [2024-07-15 10:23:13.932437] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.375 [2024-07-15 10:23:13.932453] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1344d90 00:15:49.375 [2024-07-15 10:23:13.932461] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.375 [2024-07-15 10:23:13.933455] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.375 [2024-07-15 10:23:13.933475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:49.375 BaseBdev3 00:15:49.375 10:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:49.375 10:23:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:49.375 BaseBdev4_malloc 00:15:49.375 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:49.633 true 00:15:49.633 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:49.633 [2024-07-15 10:23:14.413152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:49.633 [2024-07-15 10:23:14.413185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.633 [2024-07-15 10:23:14.413200] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1347000 00:15:49.633 [2024-07-15 10:23:14.413208] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.633 [2024-07-15 10:23:14.414258] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.633 [2024-07-15 10:23:14.414280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:49.633 BaseBdev4 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:49.892 [2024-07-15 10:23:14.565566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.892 [2024-07-15 10:23:14.566396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.892 [2024-07-15 10:23:14.566441] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.892 [2024-07-15 10:23:14.566476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.892 [2024-07-15 10:23:14.566622] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1347dd0 00:15:49.892 [2024-07-15 10:23:14.566629] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:49.892 [2024-07-15 10:23:14.566752] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1349080 00:15:49.892 [2024-07-15 10:23:14.566845] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1347dd0 00:15:49.892 [2024-07-15 10:23:14.566852] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x1347dd0 00:15:49.892 [2024-07-15 10:23:14.566922] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.892 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.152 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.152 "name": "raid_bdev1", 00:15:50.152 "uuid": "eba194bc-361a-44d6-a64b-a56b6ef4a6ea", 00:15:50.152 "strip_size_kb": 64, 00:15:50.152 "state": "online", 00:15:50.152 "raid_level": "raid0", 00:15:50.152 "superblock": true, 00:15:50.152 "num_base_bdevs": 4, 00:15:50.152 "num_base_bdevs_discovered": 4, 00:15:50.152 "num_base_bdevs_operational": 4, 00:15:50.152 "base_bdevs_list": [ 00:15:50.152 { 00:15:50.152 "name": "BaseBdev1", 00:15:50.152 "uuid": "a50ec71b-401e-5165-89eb-8b6b8178f532", 00:15:50.152 "is_configured": true, 00:15:50.152 "data_offset": 2048, 00:15:50.152 "data_size": 63488 00:15:50.152 }, 00:15:50.152 { 00:15:50.152 "name": "BaseBdev2", 00:15:50.152 "uuid": "28a160de-6f34-5113-8f5e-d7dd8a193763", 00:15:50.152 "is_configured": true, 00:15:50.152 "data_offset": 2048, 00:15:50.152 "data_size": 63488 00:15:50.152 }, 00:15:50.152 { 00:15:50.152 "name": "BaseBdev3", 00:15:50.152 "uuid": "6627e7eb-d4d9-5ccf-beb1-22acb01960a1", 00:15:50.152 "is_configured": true, 00:15:50.152 "data_offset": 2048, 00:15:50.152 "data_size": 63488 00:15:50.152 }, 00:15:50.152 { 00:15:50.152 "name": "BaseBdev4", 00:15:50.152 "uuid": "fa7b64d8-b0f7-560d-a973-ae02bbd50ad4", 00:15:50.152 "is_configured": true, 00:15:50.152 "data_offset": 2048, 00:15:50.152 "data_size": 63488 00:15:50.152 } 00:15:50.152 ] 00:15:50.152 }' 00:15:50.152 10:23:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.152 10:23:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.720 10:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:50.720 10:23:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:50.720 [2024-07-15 10:23:15.315684] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x119aef0 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.658 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.917 10:23:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.917 "name": "raid_bdev1", 00:15:51.917 "uuid": "eba194bc-361a-44d6-a64b-a56b6ef4a6ea", 00:15:51.917 "strip_size_kb": 64, 00:15:51.917 "state": "online", 00:15:51.917 "raid_level": "raid0", 00:15:51.917 "superblock": true, 00:15:51.917 "num_base_bdevs": 4, 00:15:51.917 "num_base_bdevs_discovered": 4, 00:15:51.917 "num_base_bdevs_operational": 4, 00:15:51.917 "base_bdevs_list": [ 00:15:51.917 { 00:15:51.917 "name": "BaseBdev1", 00:15:51.917 "uuid": "a50ec71b-401e-5165-89eb-8b6b8178f532", 00:15:51.917 "is_configured": true, 00:15:51.917 "data_offset": 2048, 00:15:51.917 "data_size": 63488 00:15:51.917 }, 00:15:51.917 { 00:15:51.917 "name": "BaseBdev2", 00:15:51.917 "uuid": "28a160de-6f34-5113-8f5e-d7dd8a193763", 00:15:51.917 "is_configured": true, 00:15:51.917 "data_offset": 2048, 00:15:51.917 "data_size": 63488 00:15:51.917 }, 00:15:51.917 { 00:15:51.917 "name": "BaseBdev3", 00:15:51.917 "uuid": "6627e7eb-d4d9-5ccf-beb1-22acb01960a1", 00:15:51.917 "is_configured": true, 00:15:51.917 "data_offset": 2048, 00:15:51.917 "data_size": 63488 00:15:51.917 }, 00:15:51.917 { 00:15:51.917 "name": "BaseBdev4", 00:15:51.917 "uuid": "fa7b64d8-b0f7-560d-a973-ae02bbd50ad4", 00:15:51.917 "is_configured": true, 00:15:51.917 "data_offset": 2048, 00:15:51.917 "data_size": 63488 00:15:51.917 } 00:15:51.917 ] 00:15:51.917 }' 00:15:51.917 10:23:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.917 10:23:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.485 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:52.485 [2024-07-15 10:23:17.231889] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.485 [2024-07-15 10:23:17.231924] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.485 [2024-07-15 10:23:17.233961] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.485 [2024-07-15 10:23:17.233986] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.485 [2024-07-15 10:23:17.234012] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.485 [2024-07-15 10:23:17.234019] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1347dd0 name raid_bdev1, state offline 00:15:52.485 0 00:15:52.485 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1816943 00:15:52.485 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1816943 ']' 00:15:52.485 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1816943 00:15:52.485 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:52.485 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.485 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1816943 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1816943' 00:15:52.745 killing process with pid 1816943 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1816943 00:15:52.745 [2024-07-15 10:23:17.307533] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1816943 00:15:52.745 [2024-07-15 10:23:17.333924] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.wc0gCxOdSn 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:15:52.745 00:15:52.745 real 0m5.860s 00:15:52.745 user 0m9.057s 
00:15:52.745 sys 0m0.992s 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.745 10:23:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.745 ************************************ 00:15:52.745 END TEST raid_write_error_test 00:15:52.745 ************************************ 00:15:53.005 10:23:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:53.005 10:23:17 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:53.005 10:23:17 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:15:53.005 10:23:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:53.005 10:23:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.005 10:23:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.005 ************************************ 00:15:53.005 START TEST raid_state_function_test 00:15:53.005 ************************************ 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1818087 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1818087' 00:15:53.005 Process raid pid: 1818087 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1818087 /var/tmp/spdk-raid.sock 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1818087 ']' 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:53.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.005 10:23:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.005 [2024-07-15 10:23:17.673343] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
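The bring-up above follows the usual pattern for these bdev_raid tests: a minimal bdev_svc application is started on a private RPC socket with the bdev_raid debug log component enabled, and the script waits for that socket before driving the state machine over RPC. A rough sketch of the same sequence under the workspace paths shown in the trace (the rpc_get_methods poll here stands in for the harness's waitforlisten helper and is only an approximation):

    spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
    sock=/var/tmp/spdk-raid.sock
    # launch the bare bdev application with raid debug logging on its own RPC socket
    $spdk/test/app/bdev_svc/bdev_svc -r $sock -i 0 -L bdev_raid &
    raid_pid=$!
    # block until the app answers RPCs before issuing any bdev_* commands
    until $spdk/scripts/rpc.py -s $sock -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done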
00:15:53.005 [2024-07-15 10:23:17.673391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:53.005 
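The run of qat_pci_device_allocate()/EAL messages above and below is the DPDK QAT driver probing every crypto virtual function on PCI buses 0000:3d and 0000:3f and refusing further devices once its maximum is reached; the raid state test proceeds regardless, since its base bdevs are malloc disks. If you want to see which functions the EAL is walking here, listing the two buses named in the messages is enough (bus numbers taken from the trace; purely illustrative):

    # enumerate the PCI functions on the two buses the QAT driver is probing
    lspci -s 3d:
    lspci -s 3f: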
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:53.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:53.005 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:53.005 [2024-07-15 10:23:17.764543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.265 [2024-07-15 10:23:17.838047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.265 [2024-07-15 10:23:17.888618] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.265 [2024-07-15 10:23:17.888647] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.832 10:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.832 10:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:53.832 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:54.091 [2024-07-15 10:23:18.635506] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.091 [2024-07-15 10:23:18.635546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.091 [2024-07-15 10:23:18.635554] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.092 [2024-07-15 10:23:18.635561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.092 [2024-07-15 10:23:18.635567] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.092 [2024-07-15 10:23:18.635574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.092 [2024-07-15 10:23:18.635579] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.092 [2024-07-15 10:23:18.635586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.092 "name": "Existed_Raid", 00:15:54.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.092 "strip_size_kb": 64, 00:15:54.092 "state": "configuring", 00:15:54.092 "raid_level": "concat", 00:15:54.092 "superblock": false, 00:15:54.092 "num_base_bdevs": 4, 00:15:54.092 "num_base_bdevs_discovered": 0, 00:15:54.092 "num_base_bdevs_operational": 4, 00:15:54.092 "base_bdevs_list": [ 00:15:54.092 { 00:15:54.092 "name": "BaseBdev1", 00:15:54.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.092 "is_configured": false, 00:15:54.092 "data_offset": 0, 00:15:54.092 "data_size": 0 00:15:54.092 }, 00:15:54.092 { 00:15:54.092 "name": "BaseBdev2", 00:15:54.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.092 "is_configured": false, 00:15:54.092 "data_offset": 0, 00:15:54.092 "data_size": 0 00:15:54.092 }, 00:15:54.092 { 00:15:54.092 "name": "BaseBdev3", 00:15:54.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.092 "is_configured": false, 00:15:54.092 "data_offset": 0, 00:15:54.092 "data_size": 0 00:15:54.092 }, 00:15:54.092 { 00:15:54.092 "name": "BaseBdev4", 00:15:54.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.092 "is_configured": false, 00:15:54.092 "data_offset": 0, 00:15:54.092 "data_size": 0 00:15:54.092 } 00:15:54.092 ] 00:15:54.092 }' 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.092 10:23:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.660 10:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.919 [2024-07-15 10:23:19.469579] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.919 [2024-07-15 10:23:19.469601] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14c9f60 name Existed_Raid, state configuring 00:15:54.919 10:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:54.919 
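What follows is the concat/4-bdev pass of raid_state_function_test walking the raid state machine over RPC: the volume is created before any base bdev exists and sits in "configuring", each malloc disk added with bdev_malloc_create is claimed and bumps num_base_bdevs_discovered, the fourth one takes Existed_Raid "online", and deleting a base bdev near the end drops it to "offline" because concat has no redundancy. A condensed sketch of that flow against the same RPC socket (illustrative, not an excerpt of the log; the test itself also deletes and re-creates the volume between steps):

    rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # create the concat volume first; with no base bdevs present it stays "configuring"
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # add the base bdevs one by one (32 MiB malloc disks with 512-byte blocks)
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_create 32 512 -b $b
    done
    # once the fourth base bdev is claimed, the reported state becomes "online"
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'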
[2024-07-15 10:23:19.646047] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.919 [2024-07-15 10:23:19.646071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.919 [2024-07-15 10:23:19.646077] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.919 [2024-07-15 10:23:19.646084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.919 [2024-07-15 10:23:19.646090] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:54.919 [2024-07-15 10:23:19.646097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.919 [2024-07-15 10:23:19.646103] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.919 [2024-07-15 10:23:19.646110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.919 10:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:55.179 [2024-07-15 10:23:19.831003] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.179 BaseBdev1 00:15:55.179 10:23:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:55.179 10:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:55.179 10:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:55.179 10:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:55.179 10:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:55.179 10:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:55.179 10:23:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.438 [ 00:15:55.438 { 00:15:55.438 "name": "BaseBdev1", 00:15:55.438 "aliases": [ 00:15:55.438 "690ab036-5522-4348-853c-b7aea8f0bd30" 00:15:55.438 ], 00:15:55.438 "product_name": "Malloc disk", 00:15:55.438 "block_size": 512, 00:15:55.438 "num_blocks": 65536, 00:15:55.438 "uuid": "690ab036-5522-4348-853c-b7aea8f0bd30", 00:15:55.438 "assigned_rate_limits": { 00:15:55.438 "rw_ios_per_sec": 0, 00:15:55.438 "rw_mbytes_per_sec": 0, 00:15:55.438 "r_mbytes_per_sec": 0, 00:15:55.438 "w_mbytes_per_sec": 0 00:15:55.438 }, 00:15:55.438 "claimed": true, 00:15:55.438 "claim_type": "exclusive_write", 00:15:55.438 "zoned": false, 00:15:55.438 "supported_io_types": { 00:15:55.438 "read": true, 00:15:55.438 "write": true, 00:15:55.438 "unmap": true, 00:15:55.438 "flush": true, 00:15:55.438 "reset": true, 00:15:55.438 "nvme_admin": false, 00:15:55.438 "nvme_io": false, 00:15:55.438 "nvme_io_md": false, 00:15:55.438 "write_zeroes": true, 00:15:55.438 "zcopy": true, 00:15:55.438 "get_zone_info": false, 00:15:55.438 "zone_management": false, 00:15:55.438 
"zone_append": false, 00:15:55.438 "compare": false, 00:15:55.438 "compare_and_write": false, 00:15:55.438 "abort": true, 00:15:55.438 "seek_hole": false, 00:15:55.438 "seek_data": false, 00:15:55.438 "copy": true, 00:15:55.438 "nvme_iov_md": false 00:15:55.438 }, 00:15:55.438 "memory_domains": [ 00:15:55.438 { 00:15:55.438 "dma_device_id": "system", 00:15:55.438 "dma_device_type": 1 00:15:55.438 }, 00:15:55.438 { 00:15:55.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.438 "dma_device_type": 2 00:15:55.438 } 00:15:55.438 ], 00:15:55.438 "driver_specific": {} 00:15:55.438 } 00:15:55.438 ] 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.438 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.698 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.698 "name": "Existed_Raid", 00:15:55.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.698 "strip_size_kb": 64, 00:15:55.698 "state": "configuring", 00:15:55.698 "raid_level": "concat", 00:15:55.698 "superblock": false, 00:15:55.698 "num_base_bdevs": 4, 00:15:55.698 "num_base_bdevs_discovered": 1, 00:15:55.698 "num_base_bdevs_operational": 4, 00:15:55.698 "base_bdevs_list": [ 00:15:55.698 { 00:15:55.698 "name": "BaseBdev1", 00:15:55.698 "uuid": "690ab036-5522-4348-853c-b7aea8f0bd30", 00:15:55.698 "is_configured": true, 00:15:55.698 "data_offset": 0, 00:15:55.698 "data_size": 65536 00:15:55.698 }, 00:15:55.698 { 00:15:55.698 "name": "BaseBdev2", 00:15:55.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.698 "is_configured": false, 00:15:55.698 "data_offset": 0, 00:15:55.698 "data_size": 0 00:15:55.698 }, 00:15:55.698 { 00:15:55.698 "name": "BaseBdev3", 00:15:55.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.698 "is_configured": false, 00:15:55.698 "data_offset": 0, 00:15:55.698 "data_size": 0 00:15:55.698 }, 00:15:55.698 { 00:15:55.698 "name": "BaseBdev4", 00:15:55.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.698 "is_configured": false, 00:15:55.698 "data_offset": 0, 
00:15:55.698 "data_size": 0 00:15:55.698 } 00:15:55.698 ] 00:15:55.698 }' 00:15:55.698 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.698 10:23:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.267 10:23:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:56.268 [2024-07-15 10:23:21.022077] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:56.268 [2024-07-15 10:23:21.022108] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14c97d0 name Existed_Raid, state configuring 00:15:56.268 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:56.527 [2024-07-15 10:23:21.190539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.527 [2024-07-15 10:23:21.191622] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.527 [2024-07-15 10:23:21.191651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.527 [2024-07-15 10:23:21.191658] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:56.527 [2024-07-15 10:23:21.191665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:56.527 [2024-07-15 10:23:21.191671] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:56.527 [2024-07-15 10:23:21.191678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.527 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.527 10:23:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.786 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.786 "name": "Existed_Raid", 00:15:56.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.786 "strip_size_kb": 64, 00:15:56.786 "state": "configuring", 00:15:56.786 "raid_level": "concat", 00:15:56.786 "superblock": false, 00:15:56.786 "num_base_bdevs": 4, 00:15:56.786 "num_base_bdevs_discovered": 1, 00:15:56.786 "num_base_bdevs_operational": 4, 00:15:56.786 "base_bdevs_list": [ 00:15:56.786 { 00:15:56.786 "name": "BaseBdev1", 00:15:56.786 "uuid": "690ab036-5522-4348-853c-b7aea8f0bd30", 00:15:56.786 "is_configured": true, 00:15:56.786 "data_offset": 0, 00:15:56.786 "data_size": 65536 00:15:56.786 }, 00:15:56.786 { 00:15:56.786 "name": "BaseBdev2", 00:15:56.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.786 "is_configured": false, 00:15:56.786 "data_offset": 0, 00:15:56.786 "data_size": 0 00:15:56.786 }, 00:15:56.786 { 00:15:56.786 "name": "BaseBdev3", 00:15:56.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.786 "is_configured": false, 00:15:56.786 "data_offset": 0, 00:15:56.786 "data_size": 0 00:15:56.786 }, 00:15:56.786 { 00:15:56.786 "name": "BaseBdev4", 00:15:56.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.786 "is_configured": false, 00:15:56.786 "data_offset": 0, 00:15:56.786 "data_size": 0 00:15:56.786 } 00:15:56.786 ] 00:15:56.786 }' 00:15:56.786 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.786 10:23:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.354 10:23:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:57.354 [2024-07-15 10:23:22.035486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.354 BaseBdev2 00:15:57.354 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:57.354 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:57.354 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:57.354 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:57.354 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:57.354 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:57.354 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:57.614 [ 00:15:57.614 { 00:15:57.614 "name": "BaseBdev2", 00:15:57.614 "aliases": [ 00:15:57.614 "cf0503cd-619a-4ae3-9ea5-fa17de01991b" 00:15:57.614 ], 00:15:57.614 "product_name": "Malloc disk", 00:15:57.614 "block_size": 512, 00:15:57.614 "num_blocks": 65536, 00:15:57.614 "uuid": "cf0503cd-619a-4ae3-9ea5-fa17de01991b", 00:15:57.614 
"assigned_rate_limits": { 00:15:57.614 "rw_ios_per_sec": 0, 00:15:57.614 "rw_mbytes_per_sec": 0, 00:15:57.614 "r_mbytes_per_sec": 0, 00:15:57.614 "w_mbytes_per_sec": 0 00:15:57.614 }, 00:15:57.614 "claimed": true, 00:15:57.614 "claim_type": "exclusive_write", 00:15:57.614 "zoned": false, 00:15:57.614 "supported_io_types": { 00:15:57.614 "read": true, 00:15:57.614 "write": true, 00:15:57.614 "unmap": true, 00:15:57.614 "flush": true, 00:15:57.614 "reset": true, 00:15:57.614 "nvme_admin": false, 00:15:57.614 "nvme_io": false, 00:15:57.614 "nvme_io_md": false, 00:15:57.614 "write_zeroes": true, 00:15:57.614 "zcopy": true, 00:15:57.614 "get_zone_info": false, 00:15:57.614 "zone_management": false, 00:15:57.614 "zone_append": false, 00:15:57.614 "compare": false, 00:15:57.614 "compare_and_write": false, 00:15:57.614 "abort": true, 00:15:57.614 "seek_hole": false, 00:15:57.614 "seek_data": false, 00:15:57.614 "copy": true, 00:15:57.614 "nvme_iov_md": false 00:15:57.614 }, 00:15:57.614 "memory_domains": [ 00:15:57.614 { 00:15:57.614 "dma_device_id": "system", 00:15:57.614 "dma_device_type": 1 00:15:57.614 }, 00:15:57.614 { 00:15:57.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.614 "dma_device_type": 2 00:15:57.614 } 00:15:57.614 ], 00:15:57.614 "driver_specific": {} 00:15:57.614 } 00:15:57.614 ] 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.614 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.873 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.873 "name": "Existed_Raid", 00:15:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.873 "strip_size_kb": 64, 00:15:57.873 "state": "configuring", 00:15:57.873 "raid_level": "concat", 00:15:57.873 "superblock": false, 00:15:57.873 "num_base_bdevs": 4, 00:15:57.873 "num_base_bdevs_discovered": 2, 
00:15:57.873 "num_base_bdevs_operational": 4, 00:15:57.873 "base_bdevs_list": [ 00:15:57.873 { 00:15:57.873 "name": "BaseBdev1", 00:15:57.873 "uuid": "690ab036-5522-4348-853c-b7aea8f0bd30", 00:15:57.873 "is_configured": true, 00:15:57.873 "data_offset": 0, 00:15:57.873 "data_size": 65536 00:15:57.873 }, 00:15:57.873 { 00:15:57.873 "name": "BaseBdev2", 00:15:57.873 "uuid": "cf0503cd-619a-4ae3-9ea5-fa17de01991b", 00:15:57.873 "is_configured": true, 00:15:57.873 "data_offset": 0, 00:15:57.873 "data_size": 65536 00:15:57.873 }, 00:15:57.873 { 00:15:57.873 "name": "BaseBdev3", 00:15:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.873 "is_configured": false, 00:15:57.873 "data_offset": 0, 00:15:57.873 "data_size": 0 00:15:57.873 }, 00:15:57.873 { 00:15:57.873 "name": "BaseBdev4", 00:15:57.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.873 "is_configured": false, 00:15:57.873 "data_offset": 0, 00:15:57.873 "data_size": 0 00:15:57.873 } 00:15:57.873 ] 00:15:57.873 }' 00:15:57.873 10:23:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.873 10:23:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.442 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:58.442 [2024-07-15 10:23:23.205191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:58.442 BaseBdev3 00:15:58.442 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:58.442 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:58.442 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:58.442 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:58.442 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:58.442 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:58.442 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:58.699 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:58.958 [ 00:15:58.958 { 00:15:58.958 "name": "BaseBdev3", 00:15:58.958 "aliases": [ 00:15:58.958 "6d91bf5d-4093-4fb4-b7a4-94f2e9cb7091" 00:15:58.958 ], 00:15:58.958 "product_name": "Malloc disk", 00:15:58.958 "block_size": 512, 00:15:58.958 "num_blocks": 65536, 00:15:58.958 "uuid": "6d91bf5d-4093-4fb4-b7a4-94f2e9cb7091", 00:15:58.958 "assigned_rate_limits": { 00:15:58.958 "rw_ios_per_sec": 0, 00:15:58.958 "rw_mbytes_per_sec": 0, 00:15:58.958 "r_mbytes_per_sec": 0, 00:15:58.958 "w_mbytes_per_sec": 0 00:15:58.958 }, 00:15:58.958 "claimed": true, 00:15:58.958 "claim_type": "exclusive_write", 00:15:58.958 "zoned": false, 00:15:58.958 "supported_io_types": { 00:15:58.958 "read": true, 00:15:58.958 "write": true, 00:15:58.958 "unmap": true, 00:15:58.958 "flush": true, 00:15:58.958 "reset": true, 00:15:58.958 "nvme_admin": false, 00:15:58.958 "nvme_io": false, 00:15:58.958 
"nvme_io_md": false, 00:15:58.958 "write_zeroes": true, 00:15:58.958 "zcopy": true, 00:15:58.958 "get_zone_info": false, 00:15:58.958 "zone_management": false, 00:15:58.958 "zone_append": false, 00:15:58.958 "compare": false, 00:15:58.958 "compare_and_write": false, 00:15:58.958 "abort": true, 00:15:58.958 "seek_hole": false, 00:15:58.958 "seek_data": false, 00:15:58.958 "copy": true, 00:15:58.958 "nvme_iov_md": false 00:15:58.958 }, 00:15:58.958 "memory_domains": [ 00:15:58.958 { 00:15:58.958 "dma_device_id": "system", 00:15:58.958 "dma_device_type": 1 00:15:58.958 }, 00:15:58.958 { 00:15:58.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.958 "dma_device_type": 2 00:15:58.958 } 00:15:58.958 ], 00:15:58.958 "driver_specific": {} 00:15:58.958 } 00:15:58.958 ] 00:15:58.958 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:58.958 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:58.958 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:58.958 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.959 "name": "Existed_Raid", 00:15:58.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.959 "strip_size_kb": 64, 00:15:58.959 "state": "configuring", 00:15:58.959 "raid_level": "concat", 00:15:58.959 "superblock": false, 00:15:58.959 "num_base_bdevs": 4, 00:15:58.959 "num_base_bdevs_discovered": 3, 00:15:58.959 "num_base_bdevs_operational": 4, 00:15:58.959 "base_bdevs_list": [ 00:15:58.959 { 00:15:58.959 "name": "BaseBdev1", 00:15:58.959 "uuid": "690ab036-5522-4348-853c-b7aea8f0bd30", 00:15:58.959 "is_configured": true, 00:15:58.959 "data_offset": 0, 00:15:58.959 "data_size": 65536 00:15:58.959 }, 00:15:58.959 { 00:15:58.959 "name": "BaseBdev2", 00:15:58.959 "uuid": "cf0503cd-619a-4ae3-9ea5-fa17de01991b", 00:15:58.959 "is_configured": true, 00:15:58.959 "data_offset": 0, 00:15:58.959 "data_size": 65536 00:15:58.959 }, 00:15:58.959 { 
00:15:58.959 "name": "BaseBdev3", 00:15:58.959 "uuid": "6d91bf5d-4093-4fb4-b7a4-94f2e9cb7091", 00:15:58.959 "is_configured": true, 00:15:58.959 "data_offset": 0, 00:15:58.959 "data_size": 65536 00:15:58.959 }, 00:15:58.959 { 00:15:58.959 "name": "BaseBdev4", 00:15:58.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.959 "is_configured": false, 00:15:58.959 "data_offset": 0, 00:15:58.959 "data_size": 0 00:15:58.959 } 00:15:58.959 ] 00:15:58.959 }' 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.959 10:23:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.526 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:59.819 [2024-07-15 10:23:24.354791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.819 [2024-07-15 10:23:24.354818] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14ca830 00:15:59.819 [2024-07-15 10:23:24.354825] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:59.819 [2024-07-15 10:23:24.354962] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14c3160 00:15:59.819 [2024-07-15 10:23:24.355051] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14ca830 00:15:59.819 [2024-07-15 10:23:24.355056] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x14ca830 00:15:59.819 [2024-07-15 10:23:24.355193] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.819 BaseBdev4 00:15:59.819 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:59.819 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:59.819 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:59.819 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:59.819 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:59.819 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:59.819 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.819 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:00.116 [ 00:16:00.116 { 00:16:00.116 "name": "BaseBdev4", 00:16:00.116 "aliases": [ 00:16:00.117 "7058bd2f-83b3-4997-bde8-05796843f6fe" 00:16:00.117 ], 00:16:00.117 "product_name": "Malloc disk", 00:16:00.117 "block_size": 512, 00:16:00.117 "num_blocks": 65536, 00:16:00.117 "uuid": "7058bd2f-83b3-4997-bde8-05796843f6fe", 00:16:00.117 "assigned_rate_limits": { 00:16:00.117 "rw_ios_per_sec": 0, 00:16:00.117 "rw_mbytes_per_sec": 0, 00:16:00.117 "r_mbytes_per_sec": 0, 00:16:00.117 "w_mbytes_per_sec": 0 00:16:00.117 }, 00:16:00.117 "claimed": true, 00:16:00.117 "claim_type": "exclusive_write", 00:16:00.117 "zoned": false, 00:16:00.117 "supported_io_types": { 
00:16:00.117 "read": true, 00:16:00.117 "write": true, 00:16:00.117 "unmap": true, 00:16:00.117 "flush": true, 00:16:00.117 "reset": true, 00:16:00.117 "nvme_admin": false, 00:16:00.117 "nvme_io": false, 00:16:00.117 "nvme_io_md": false, 00:16:00.117 "write_zeroes": true, 00:16:00.117 "zcopy": true, 00:16:00.117 "get_zone_info": false, 00:16:00.117 "zone_management": false, 00:16:00.117 "zone_append": false, 00:16:00.117 "compare": false, 00:16:00.117 "compare_and_write": false, 00:16:00.117 "abort": true, 00:16:00.117 "seek_hole": false, 00:16:00.117 "seek_data": false, 00:16:00.117 "copy": true, 00:16:00.117 "nvme_iov_md": false 00:16:00.117 }, 00:16:00.117 "memory_domains": [ 00:16:00.117 { 00:16:00.117 "dma_device_id": "system", 00:16:00.117 "dma_device_type": 1 00:16:00.117 }, 00:16:00.117 { 00:16:00.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.117 "dma_device_type": 2 00:16:00.117 } 00:16:00.117 ], 00:16:00.117 "driver_specific": {} 00:16:00.117 } 00:16:00.117 ] 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.117 "name": "Existed_Raid", 00:16:00.117 "uuid": "56fb1eae-7d6e-4aa7-abc1-c099006fe79a", 00:16:00.117 "strip_size_kb": 64, 00:16:00.117 "state": "online", 00:16:00.117 "raid_level": "concat", 00:16:00.117 "superblock": false, 00:16:00.117 "num_base_bdevs": 4, 00:16:00.117 "num_base_bdevs_discovered": 4, 00:16:00.117 "num_base_bdevs_operational": 4, 00:16:00.117 "base_bdevs_list": [ 00:16:00.117 { 00:16:00.117 "name": "BaseBdev1", 00:16:00.117 "uuid": "690ab036-5522-4348-853c-b7aea8f0bd30", 00:16:00.117 "is_configured": true, 00:16:00.117 "data_offset": 0, 00:16:00.117 "data_size": 65536 00:16:00.117 }, 00:16:00.117 { 00:16:00.117 "name": 
"BaseBdev2", 00:16:00.117 "uuid": "cf0503cd-619a-4ae3-9ea5-fa17de01991b", 00:16:00.117 "is_configured": true, 00:16:00.117 "data_offset": 0, 00:16:00.117 "data_size": 65536 00:16:00.117 }, 00:16:00.117 { 00:16:00.117 "name": "BaseBdev3", 00:16:00.117 "uuid": "6d91bf5d-4093-4fb4-b7a4-94f2e9cb7091", 00:16:00.117 "is_configured": true, 00:16:00.117 "data_offset": 0, 00:16:00.117 "data_size": 65536 00:16:00.117 }, 00:16:00.117 { 00:16:00.117 "name": "BaseBdev4", 00:16:00.117 "uuid": "7058bd2f-83b3-4997-bde8-05796843f6fe", 00:16:00.117 "is_configured": true, 00:16:00.117 "data_offset": 0, 00:16:00.117 "data_size": 65536 00:16:00.117 } 00:16:00.117 ] 00:16:00.117 }' 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.117 10:23:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.686 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:00.686 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:00.686 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:00.686 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:00.686 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:00.686 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:00.686 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:00.686 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:00.945 [2024-07-15 10:23:25.526040] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.945 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:00.945 "name": "Existed_Raid", 00:16:00.945 "aliases": [ 00:16:00.945 "56fb1eae-7d6e-4aa7-abc1-c099006fe79a" 00:16:00.945 ], 00:16:00.945 "product_name": "Raid Volume", 00:16:00.945 "block_size": 512, 00:16:00.945 "num_blocks": 262144, 00:16:00.945 "uuid": "56fb1eae-7d6e-4aa7-abc1-c099006fe79a", 00:16:00.945 "assigned_rate_limits": { 00:16:00.945 "rw_ios_per_sec": 0, 00:16:00.945 "rw_mbytes_per_sec": 0, 00:16:00.945 "r_mbytes_per_sec": 0, 00:16:00.945 "w_mbytes_per_sec": 0 00:16:00.945 }, 00:16:00.945 "claimed": false, 00:16:00.945 "zoned": false, 00:16:00.945 "supported_io_types": { 00:16:00.945 "read": true, 00:16:00.945 "write": true, 00:16:00.945 "unmap": true, 00:16:00.945 "flush": true, 00:16:00.945 "reset": true, 00:16:00.945 "nvme_admin": false, 00:16:00.945 "nvme_io": false, 00:16:00.945 "nvme_io_md": false, 00:16:00.945 "write_zeroes": true, 00:16:00.945 "zcopy": false, 00:16:00.945 "get_zone_info": false, 00:16:00.945 "zone_management": false, 00:16:00.945 "zone_append": false, 00:16:00.945 "compare": false, 00:16:00.945 "compare_and_write": false, 00:16:00.945 "abort": false, 00:16:00.945 "seek_hole": false, 00:16:00.945 "seek_data": false, 00:16:00.945 "copy": false, 00:16:00.945 "nvme_iov_md": false 00:16:00.945 }, 00:16:00.945 "memory_domains": [ 00:16:00.945 { 00:16:00.946 "dma_device_id": "system", 00:16:00.946 "dma_device_type": 1 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.946 
"dma_device_type": 2 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "dma_device_id": "system", 00:16:00.946 "dma_device_type": 1 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.946 "dma_device_type": 2 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "dma_device_id": "system", 00:16:00.946 "dma_device_type": 1 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.946 "dma_device_type": 2 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "dma_device_id": "system", 00:16:00.946 "dma_device_type": 1 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.946 "dma_device_type": 2 00:16:00.946 } 00:16:00.946 ], 00:16:00.946 "driver_specific": { 00:16:00.946 "raid": { 00:16:00.946 "uuid": "56fb1eae-7d6e-4aa7-abc1-c099006fe79a", 00:16:00.946 "strip_size_kb": 64, 00:16:00.946 "state": "online", 00:16:00.946 "raid_level": "concat", 00:16:00.946 "superblock": false, 00:16:00.946 "num_base_bdevs": 4, 00:16:00.946 "num_base_bdevs_discovered": 4, 00:16:00.946 "num_base_bdevs_operational": 4, 00:16:00.946 "base_bdevs_list": [ 00:16:00.946 { 00:16:00.946 "name": "BaseBdev1", 00:16:00.946 "uuid": "690ab036-5522-4348-853c-b7aea8f0bd30", 00:16:00.946 "is_configured": true, 00:16:00.946 "data_offset": 0, 00:16:00.946 "data_size": 65536 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "name": "BaseBdev2", 00:16:00.946 "uuid": "cf0503cd-619a-4ae3-9ea5-fa17de01991b", 00:16:00.946 "is_configured": true, 00:16:00.946 "data_offset": 0, 00:16:00.946 "data_size": 65536 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "name": "BaseBdev3", 00:16:00.946 "uuid": "6d91bf5d-4093-4fb4-b7a4-94f2e9cb7091", 00:16:00.946 "is_configured": true, 00:16:00.946 "data_offset": 0, 00:16:00.946 "data_size": 65536 00:16:00.946 }, 00:16:00.946 { 00:16:00.946 "name": "BaseBdev4", 00:16:00.946 "uuid": "7058bd2f-83b3-4997-bde8-05796843f6fe", 00:16:00.946 "is_configured": true, 00:16:00.946 "data_offset": 0, 00:16:00.946 "data_size": 65536 00:16:00.946 } 00:16:00.946 ] 00:16:00.946 } 00:16:00.946 } 00:16:00.946 }' 00:16:00.946 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.946 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:00.946 BaseBdev2 00:16:00.946 BaseBdev3 00:16:00.946 BaseBdev4' 00:16:00.946 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:00.946 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:00.946 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:01.206 "name": "BaseBdev1", 00:16:01.206 "aliases": [ 00:16:01.206 "690ab036-5522-4348-853c-b7aea8f0bd30" 00:16:01.206 ], 00:16:01.206 "product_name": "Malloc disk", 00:16:01.206 "block_size": 512, 00:16:01.206 "num_blocks": 65536, 00:16:01.206 "uuid": "690ab036-5522-4348-853c-b7aea8f0bd30", 00:16:01.206 "assigned_rate_limits": { 00:16:01.206 "rw_ios_per_sec": 0, 00:16:01.206 "rw_mbytes_per_sec": 0, 00:16:01.206 "r_mbytes_per_sec": 0, 00:16:01.206 "w_mbytes_per_sec": 0 00:16:01.206 }, 00:16:01.206 "claimed": true, 00:16:01.206 "claim_type": "exclusive_write", 
00:16:01.206 "zoned": false, 00:16:01.206 "supported_io_types": { 00:16:01.206 "read": true, 00:16:01.206 "write": true, 00:16:01.206 "unmap": true, 00:16:01.206 "flush": true, 00:16:01.206 "reset": true, 00:16:01.206 "nvme_admin": false, 00:16:01.206 "nvme_io": false, 00:16:01.206 "nvme_io_md": false, 00:16:01.206 "write_zeroes": true, 00:16:01.206 "zcopy": true, 00:16:01.206 "get_zone_info": false, 00:16:01.206 "zone_management": false, 00:16:01.206 "zone_append": false, 00:16:01.206 "compare": false, 00:16:01.206 "compare_and_write": false, 00:16:01.206 "abort": true, 00:16:01.206 "seek_hole": false, 00:16:01.206 "seek_data": false, 00:16:01.206 "copy": true, 00:16:01.206 "nvme_iov_md": false 00:16:01.206 }, 00:16:01.206 "memory_domains": [ 00:16:01.206 { 00:16:01.206 "dma_device_id": "system", 00:16:01.206 "dma_device_type": 1 00:16:01.206 }, 00:16:01.206 { 00:16:01.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.206 "dma_device_type": 2 00:16:01.206 } 00:16:01.206 ], 00:16:01.206 "driver_specific": {} 00:16:01.206 }' 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:01.206 10:23:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:01.466 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:01.466 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:01.466 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:01.466 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:01.466 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:01.466 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:01.466 "name": "BaseBdev2", 00:16:01.466 "aliases": [ 00:16:01.466 "cf0503cd-619a-4ae3-9ea5-fa17de01991b" 00:16:01.466 ], 00:16:01.466 "product_name": "Malloc disk", 00:16:01.466 "block_size": 512, 00:16:01.466 "num_blocks": 65536, 00:16:01.466 "uuid": "cf0503cd-619a-4ae3-9ea5-fa17de01991b", 00:16:01.466 "assigned_rate_limits": { 00:16:01.466 "rw_ios_per_sec": 0, 00:16:01.466 "rw_mbytes_per_sec": 0, 00:16:01.466 "r_mbytes_per_sec": 0, 00:16:01.466 "w_mbytes_per_sec": 0 00:16:01.466 }, 00:16:01.466 "claimed": true, 00:16:01.466 "claim_type": "exclusive_write", 00:16:01.466 "zoned": false, 00:16:01.466 "supported_io_types": { 00:16:01.466 "read": true, 00:16:01.466 "write": true, 00:16:01.466 "unmap": true, 00:16:01.466 "flush": true, 
00:16:01.466 "reset": true, 00:16:01.466 "nvme_admin": false, 00:16:01.466 "nvme_io": false, 00:16:01.466 "nvme_io_md": false, 00:16:01.466 "write_zeroes": true, 00:16:01.466 "zcopy": true, 00:16:01.466 "get_zone_info": false, 00:16:01.466 "zone_management": false, 00:16:01.466 "zone_append": false, 00:16:01.466 "compare": false, 00:16:01.466 "compare_and_write": false, 00:16:01.466 "abort": true, 00:16:01.466 "seek_hole": false, 00:16:01.466 "seek_data": false, 00:16:01.466 "copy": true, 00:16:01.466 "nvme_iov_md": false 00:16:01.466 }, 00:16:01.466 "memory_domains": [ 00:16:01.466 { 00:16:01.466 "dma_device_id": "system", 00:16:01.466 "dma_device_type": 1 00:16:01.466 }, 00:16:01.466 { 00:16:01.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.466 "dma_device_type": 2 00:16:01.466 } 00:16:01.466 ], 00:16:01.466 "driver_specific": {} 00:16:01.466 }' 00:16:01.466 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.725 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:01.726 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:01.985 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:01.985 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:01.985 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:01.985 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:01.985 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:01.985 "name": "BaseBdev3", 00:16:01.985 "aliases": [ 00:16:01.985 "6d91bf5d-4093-4fb4-b7a4-94f2e9cb7091" 00:16:01.985 ], 00:16:01.985 "product_name": "Malloc disk", 00:16:01.985 "block_size": 512, 00:16:01.985 "num_blocks": 65536, 00:16:01.985 "uuid": "6d91bf5d-4093-4fb4-b7a4-94f2e9cb7091", 00:16:01.985 "assigned_rate_limits": { 00:16:01.985 "rw_ios_per_sec": 0, 00:16:01.985 "rw_mbytes_per_sec": 0, 00:16:01.985 "r_mbytes_per_sec": 0, 00:16:01.985 "w_mbytes_per_sec": 0 00:16:01.985 }, 00:16:01.985 "claimed": true, 00:16:01.985 "claim_type": "exclusive_write", 00:16:01.985 "zoned": false, 00:16:01.985 "supported_io_types": { 00:16:01.985 "read": true, 00:16:01.985 "write": true, 00:16:01.985 "unmap": true, 00:16:01.985 "flush": true, 00:16:01.985 "reset": true, 00:16:01.985 "nvme_admin": false, 00:16:01.985 "nvme_io": false, 00:16:01.985 "nvme_io_md": false, 00:16:01.985 "write_zeroes": true, 00:16:01.985 
"zcopy": true, 00:16:01.985 "get_zone_info": false, 00:16:01.985 "zone_management": false, 00:16:01.985 "zone_append": false, 00:16:01.985 "compare": false, 00:16:01.985 "compare_and_write": false, 00:16:01.985 "abort": true, 00:16:01.985 "seek_hole": false, 00:16:01.985 "seek_data": false, 00:16:01.985 "copy": true, 00:16:01.985 "nvme_iov_md": false 00:16:01.985 }, 00:16:01.985 "memory_domains": [ 00:16:01.985 { 00:16:01.985 "dma_device_id": "system", 00:16:01.985 "dma_device_type": 1 00:16:01.985 }, 00:16:01.985 { 00:16:01.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.985 "dma_device_type": 2 00:16:01.985 } 00:16:01.985 ], 00:16:01.985 "driver_specific": {} 00:16:01.985 }' 00:16:01.985 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:01.985 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.245 10:23:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.245 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:02.245 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.245 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:02.245 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:02.504 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:02.504 "name": "BaseBdev4", 00:16:02.504 "aliases": [ 00:16:02.504 "7058bd2f-83b3-4997-bde8-05796843f6fe" 00:16:02.504 ], 00:16:02.504 "product_name": "Malloc disk", 00:16:02.504 "block_size": 512, 00:16:02.504 "num_blocks": 65536, 00:16:02.504 "uuid": "7058bd2f-83b3-4997-bde8-05796843f6fe", 00:16:02.504 "assigned_rate_limits": { 00:16:02.504 "rw_ios_per_sec": 0, 00:16:02.504 "rw_mbytes_per_sec": 0, 00:16:02.504 "r_mbytes_per_sec": 0, 00:16:02.504 "w_mbytes_per_sec": 0 00:16:02.504 }, 00:16:02.504 "claimed": true, 00:16:02.504 "claim_type": "exclusive_write", 00:16:02.504 "zoned": false, 00:16:02.504 "supported_io_types": { 00:16:02.504 "read": true, 00:16:02.504 "write": true, 00:16:02.504 "unmap": true, 00:16:02.504 "flush": true, 00:16:02.504 "reset": true, 00:16:02.504 "nvme_admin": false, 00:16:02.504 "nvme_io": false, 00:16:02.504 "nvme_io_md": false, 00:16:02.504 "write_zeroes": true, 00:16:02.504 "zcopy": true, 00:16:02.504 "get_zone_info": false, 00:16:02.504 "zone_management": false, 00:16:02.504 "zone_append": false, 00:16:02.504 "compare": false, 00:16:02.504 
"compare_and_write": false, 00:16:02.504 "abort": true, 00:16:02.504 "seek_hole": false, 00:16:02.504 "seek_data": false, 00:16:02.504 "copy": true, 00:16:02.504 "nvme_iov_md": false 00:16:02.504 }, 00:16:02.504 "memory_domains": [ 00:16:02.504 { 00:16:02.504 "dma_device_id": "system", 00:16:02.504 "dma_device_type": 1 00:16:02.504 }, 00:16:02.504 { 00:16:02.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.504 "dma_device_type": 2 00:16:02.504 } 00:16:02.504 ], 00:16:02.504 "driver_specific": {} 00:16:02.504 }' 00:16:02.504 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.504 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.504 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:02.504 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:02.762 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:03.020 [2024-07-15 10:23:27.643315] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.020 [2024-07-15 10:23:27.643337] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.020 [2024-07-15 10:23:27.643373] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.020 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.277 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.277 "name": "Existed_Raid", 00:16:03.277 "uuid": "56fb1eae-7d6e-4aa7-abc1-c099006fe79a", 00:16:03.277 "strip_size_kb": 64, 00:16:03.277 "state": "offline", 00:16:03.277 "raid_level": "concat", 00:16:03.277 "superblock": false, 00:16:03.277 "num_base_bdevs": 4, 00:16:03.277 "num_base_bdevs_discovered": 3, 00:16:03.277 "num_base_bdevs_operational": 3, 00:16:03.277 "base_bdevs_list": [ 00:16:03.277 { 00:16:03.277 "name": null, 00:16:03.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.277 "is_configured": false, 00:16:03.277 "data_offset": 0, 00:16:03.277 "data_size": 65536 00:16:03.277 }, 00:16:03.277 { 00:16:03.277 "name": "BaseBdev2", 00:16:03.278 "uuid": "cf0503cd-619a-4ae3-9ea5-fa17de01991b", 00:16:03.278 "is_configured": true, 00:16:03.278 "data_offset": 0, 00:16:03.278 "data_size": 65536 00:16:03.278 }, 00:16:03.278 { 00:16:03.278 "name": "BaseBdev3", 00:16:03.278 "uuid": "6d91bf5d-4093-4fb4-b7a4-94f2e9cb7091", 00:16:03.278 "is_configured": true, 00:16:03.278 "data_offset": 0, 00:16:03.278 "data_size": 65536 00:16:03.278 }, 00:16:03.278 { 00:16:03.278 "name": "BaseBdev4", 00:16:03.278 "uuid": "7058bd2f-83b3-4997-bde8-05796843f6fe", 00:16:03.278 "is_configured": true, 00:16:03.278 "data_offset": 0, 00:16:03.278 "data_size": 65536 00:16:03.278 } 00:16:03.278 ] 00:16:03.278 }' 00:16:03.278 10:23:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.278 10:23:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.536 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:03.536 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:03.536 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.536 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:03.802 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:03.802 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:03.802 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:04.060 [2024-07-15 10:23:28.638696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:04.060 10:23:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:04.060 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:04.060 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.060 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:04.060 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:04.060 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.060 10:23:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:04.318 [2024-07-15 10:23:28.993339] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:04.318 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:04.318 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:04.318 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.318 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:04.576 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:04.576 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.576 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:04.576 [2024-07-15 10:23:29.343895] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:04.577 [2024-07-15 10:23:29.343935] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14ca830 name Existed_Raid, state offline 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:04.834 10:23:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.092 BaseBdev2 00:16:05.092 10:23:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:05.092 10:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:05.092 10:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:05.092 10:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:05.092 10:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:05.092 10:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:05.092 10:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.092 10:23:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:05.350 [ 00:16:05.350 { 00:16:05.350 "name": "BaseBdev2", 00:16:05.350 "aliases": [ 00:16:05.350 "1c24f4aa-d045-4575-8113-a71a4ebea2ea" 00:16:05.350 ], 00:16:05.350 "product_name": "Malloc disk", 00:16:05.350 "block_size": 512, 00:16:05.350 "num_blocks": 65536, 00:16:05.350 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:05.350 "assigned_rate_limits": { 00:16:05.350 "rw_ios_per_sec": 0, 00:16:05.350 "rw_mbytes_per_sec": 0, 00:16:05.350 "r_mbytes_per_sec": 0, 00:16:05.350 "w_mbytes_per_sec": 0 00:16:05.350 }, 00:16:05.350 "claimed": false, 00:16:05.350 "zoned": false, 00:16:05.350 "supported_io_types": { 00:16:05.350 "read": true, 00:16:05.350 "write": true, 00:16:05.350 "unmap": true, 00:16:05.350 "flush": true, 00:16:05.350 "reset": true, 00:16:05.350 "nvme_admin": false, 00:16:05.350 "nvme_io": false, 00:16:05.350 "nvme_io_md": false, 00:16:05.350 "write_zeroes": true, 00:16:05.350 "zcopy": true, 00:16:05.350 "get_zone_info": false, 00:16:05.350 "zone_management": false, 00:16:05.350 "zone_append": false, 00:16:05.350 "compare": false, 00:16:05.350 "compare_and_write": false, 00:16:05.350 "abort": true, 00:16:05.350 "seek_hole": false, 00:16:05.350 "seek_data": false, 00:16:05.350 "copy": true, 00:16:05.350 "nvme_iov_md": false 00:16:05.350 }, 00:16:05.350 "memory_domains": [ 00:16:05.350 { 00:16:05.350 "dma_device_id": "system", 00:16:05.350 "dma_device_type": 1 00:16:05.350 }, 00:16:05.350 { 00:16:05.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.350 "dma_device_type": 2 00:16:05.350 } 00:16:05.350 ], 00:16:05.350 "driver_specific": {} 00:16:05.350 } 00:16:05.350 ] 00:16:05.350 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:05.350 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:05.350 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:05.350 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.609 BaseBdev3 00:16:05.609 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:05.609 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:05.609 10:23:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:05.609 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:05.609 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:05.609 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:05.609 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.868 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.868 [ 00:16:05.868 { 00:16:05.868 "name": "BaseBdev3", 00:16:05.868 "aliases": [ 00:16:05.868 "2a807f32-1add-4ec6-8135-d1460c256868" 00:16:05.868 ], 00:16:05.868 "product_name": "Malloc disk", 00:16:05.868 "block_size": 512, 00:16:05.868 "num_blocks": 65536, 00:16:05.868 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:05.868 "assigned_rate_limits": { 00:16:05.868 "rw_ios_per_sec": 0, 00:16:05.868 "rw_mbytes_per_sec": 0, 00:16:05.868 "r_mbytes_per_sec": 0, 00:16:05.868 "w_mbytes_per_sec": 0 00:16:05.868 }, 00:16:05.868 "claimed": false, 00:16:05.868 "zoned": false, 00:16:05.868 "supported_io_types": { 00:16:05.868 "read": true, 00:16:05.868 "write": true, 00:16:05.868 "unmap": true, 00:16:05.868 "flush": true, 00:16:05.868 "reset": true, 00:16:05.868 "nvme_admin": false, 00:16:05.868 "nvme_io": false, 00:16:05.868 "nvme_io_md": false, 00:16:05.868 "write_zeroes": true, 00:16:05.868 "zcopy": true, 00:16:05.868 "get_zone_info": false, 00:16:05.868 "zone_management": false, 00:16:05.868 "zone_append": false, 00:16:05.868 "compare": false, 00:16:05.868 "compare_and_write": false, 00:16:05.868 "abort": true, 00:16:05.868 "seek_hole": false, 00:16:05.868 "seek_data": false, 00:16:05.868 "copy": true, 00:16:05.868 "nvme_iov_md": false 00:16:05.868 }, 00:16:05.868 "memory_domains": [ 00:16:05.868 { 00:16:05.868 "dma_device_id": "system", 00:16:05.868 "dma_device_type": 1 00:16:05.868 }, 00:16:05.868 { 00:16:05.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.868 "dma_device_type": 2 00:16:05.868 } 00:16:05.868 ], 00:16:05.868 "driver_specific": {} 00:16:05.868 } 00:16:05.868 ] 00:16:05.868 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:05.868 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:05.868 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:05.868 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.127 BaseBdev4 00:16:06.127 10:23:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:06.127 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:06.127 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:06.127 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:06.127 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:06.127 10:23:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:06.127 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.127 10:23:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.386 [ 00:16:06.386 { 00:16:06.386 "name": "BaseBdev4", 00:16:06.386 "aliases": [ 00:16:06.386 "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d" 00:16:06.386 ], 00:16:06.386 "product_name": "Malloc disk", 00:16:06.386 "block_size": 512, 00:16:06.386 "num_blocks": 65536, 00:16:06.386 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:06.386 "assigned_rate_limits": { 00:16:06.386 "rw_ios_per_sec": 0, 00:16:06.386 "rw_mbytes_per_sec": 0, 00:16:06.386 "r_mbytes_per_sec": 0, 00:16:06.386 "w_mbytes_per_sec": 0 00:16:06.386 }, 00:16:06.386 "claimed": false, 00:16:06.386 "zoned": false, 00:16:06.386 "supported_io_types": { 00:16:06.386 "read": true, 00:16:06.386 "write": true, 00:16:06.386 "unmap": true, 00:16:06.386 "flush": true, 00:16:06.386 "reset": true, 00:16:06.386 "nvme_admin": false, 00:16:06.386 "nvme_io": false, 00:16:06.386 "nvme_io_md": false, 00:16:06.386 "write_zeroes": true, 00:16:06.386 "zcopy": true, 00:16:06.386 "get_zone_info": false, 00:16:06.386 "zone_management": false, 00:16:06.386 "zone_append": false, 00:16:06.386 "compare": false, 00:16:06.386 "compare_and_write": false, 00:16:06.386 "abort": true, 00:16:06.386 "seek_hole": false, 00:16:06.386 "seek_data": false, 00:16:06.386 "copy": true, 00:16:06.386 "nvme_iov_md": false 00:16:06.386 }, 00:16:06.386 "memory_domains": [ 00:16:06.386 { 00:16:06.386 "dma_device_id": "system", 00:16:06.386 "dma_device_type": 1 00:16:06.386 }, 00:16:06.386 { 00:16:06.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.386 "dma_device_type": 2 00:16:06.386 } 00:16:06.386 ], 00:16:06.386 "driver_specific": {} 00:16:06.386 } 00:16:06.386 ] 00:16:06.386 10:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:06.386 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:06.386 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:06.386 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:06.645 [2024-07-15 10:23:31.185609] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.645 [2024-07-15 10:23:31.185641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.645 [2024-07-15 10:23:31.185653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.645 [2024-07-15 10:23:31.186513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.645 [2024-07-15 10:23:31.186542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:06.645 10:23:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.645 "name": "Existed_Raid", 00:16:06.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.645 "strip_size_kb": 64, 00:16:06.645 "state": "configuring", 00:16:06.645 "raid_level": "concat", 00:16:06.645 "superblock": false, 00:16:06.645 "num_base_bdevs": 4, 00:16:06.645 "num_base_bdevs_discovered": 3, 00:16:06.645 "num_base_bdevs_operational": 4, 00:16:06.645 "base_bdevs_list": [ 00:16:06.645 { 00:16:06.645 "name": "BaseBdev1", 00:16:06.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.645 "is_configured": false, 00:16:06.645 "data_offset": 0, 00:16:06.645 "data_size": 0 00:16:06.645 }, 00:16:06.645 { 00:16:06.645 "name": "BaseBdev2", 00:16:06.645 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:06.645 "is_configured": true, 00:16:06.645 "data_offset": 0, 00:16:06.645 "data_size": 65536 00:16:06.645 }, 00:16:06.645 { 00:16:06.645 "name": "BaseBdev3", 00:16:06.645 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:06.645 "is_configured": true, 00:16:06.645 "data_offset": 0, 00:16:06.645 "data_size": 65536 00:16:06.645 }, 00:16:06.645 { 00:16:06.645 "name": "BaseBdev4", 00:16:06.645 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:06.645 "is_configured": true, 00:16:06.645 "data_offset": 0, 00:16:06.645 "data_size": 65536 00:16:06.645 } 00:16:06.645 ] 00:16:06.645 }' 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.645 10:23:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.213 10:23:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:07.472 [2024-07-15 10:23:32.019749] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
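The trace above exercises SPDK's RAID bdev JSON-RPC surface end to end: malloc base bdevs are created with bdev_malloc_create, assembled into a concat volume with bdev_raid_create, and the resulting state is checked by piping bdev_raid_get_bdevs through jq. As a minimal sketch of that same flow outside the test harness, assuming a local SPDK target already listening on /var/tmp/spdk-raid.sock and scripts/rpc.py run from an SPDK checkout (names, sizes, and paths are taken from the trace; this is illustrative, not the test script itself):

    rpc() { scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # Create four 32 MiB malloc bdevs with a 512-byte block size, as the test does.
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        rpc bdev_malloc_create 32 512 -b "$b"
    done
    # Assemble them into a concat raid bdev with a 64 KiB strip size.
    rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # With all base bdevs claimed the volume should report "online"; because concat
    # carries no redundancy, deleting any base bdev drives it to "offline".
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'

The bdev_malloc_delete and bdev_raid_remove_base_bdev calls seen later in the trace can be issued against the same setup to reproduce the online/offline/configuring transitions being verified here.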
00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.472 "name": "Existed_Raid", 00:16:07.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.472 "strip_size_kb": 64, 00:16:07.472 "state": "configuring", 00:16:07.472 "raid_level": "concat", 00:16:07.472 "superblock": false, 00:16:07.472 "num_base_bdevs": 4, 00:16:07.472 "num_base_bdevs_discovered": 2, 00:16:07.472 "num_base_bdevs_operational": 4, 00:16:07.472 "base_bdevs_list": [ 00:16:07.472 { 00:16:07.472 "name": "BaseBdev1", 00:16:07.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.472 "is_configured": false, 00:16:07.472 "data_offset": 0, 00:16:07.472 "data_size": 0 00:16:07.472 }, 00:16:07.472 { 00:16:07.472 "name": null, 00:16:07.472 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:07.472 "is_configured": false, 00:16:07.472 "data_offset": 0, 00:16:07.472 "data_size": 65536 00:16:07.472 }, 00:16:07.472 { 00:16:07.472 "name": "BaseBdev3", 00:16:07.472 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:07.472 "is_configured": true, 00:16:07.472 "data_offset": 0, 00:16:07.472 "data_size": 65536 00:16:07.472 }, 00:16:07.472 { 00:16:07.472 "name": "BaseBdev4", 00:16:07.472 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:07.472 "is_configured": true, 00:16:07.472 "data_offset": 0, 00:16:07.472 "data_size": 65536 00:16:07.472 } 00:16:07.472 ] 00:16:07.472 }' 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.472 10:23:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.039 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.039 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:08.297 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:08.297 10:23:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.297 [2024-07-15 
10:23:33.029040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.297 BaseBdev1 00:16:08.297 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:08.297 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:08.297 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:08.297 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:08.297 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:08.297 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:08.297 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.556 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:08.814 [ 00:16:08.814 { 00:16:08.814 "name": "BaseBdev1", 00:16:08.814 "aliases": [ 00:16:08.814 "0a6819c4-3f2c-4c39-9556-bde78689c573" 00:16:08.814 ], 00:16:08.814 "product_name": "Malloc disk", 00:16:08.814 "block_size": 512, 00:16:08.814 "num_blocks": 65536, 00:16:08.814 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:08.814 "assigned_rate_limits": { 00:16:08.814 "rw_ios_per_sec": 0, 00:16:08.814 "rw_mbytes_per_sec": 0, 00:16:08.814 "r_mbytes_per_sec": 0, 00:16:08.814 "w_mbytes_per_sec": 0 00:16:08.814 }, 00:16:08.814 "claimed": true, 00:16:08.814 "claim_type": "exclusive_write", 00:16:08.814 "zoned": false, 00:16:08.814 "supported_io_types": { 00:16:08.814 "read": true, 00:16:08.814 "write": true, 00:16:08.814 "unmap": true, 00:16:08.814 "flush": true, 00:16:08.814 "reset": true, 00:16:08.814 "nvme_admin": false, 00:16:08.814 "nvme_io": false, 00:16:08.814 "nvme_io_md": false, 00:16:08.814 "write_zeroes": true, 00:16:08.814 "zcopy": true, 00:16:08.814 "get_zone_info": false, 00:16:08.814 "zone_management": false, 00:16:08.814 "zone_append": false, 00:16:08.814 "compare": false, 00:16:08.814 "compare_and_write": false, 00:16:08.814 "abort": true, 00:16:08.814 "seek_hole": false, 00:16:08.814 "seek_data": false, 00:16:08.814 "copy": true, 00:16:08.814 "nvme_iov_md": false 00:16:08.814 }, 00:16:08.814 "memory_domains": [ 00:16:08.814 { 00:16:08.814 "dma_device_id": "system", 00:16:08.814 "dma_device_type": 1 00:16:08.814 }, 00:16:08.814 { 00:16:08.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.814 "dma_device_type": 2 00:16:08.815 } 00:16:08.815 ], 00:16:08.815 "driver_specific": {} 00:16:08.815 } 00:16:08.815 ] 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.815 "name": "Existed_Raid", 00:16:08.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.815 "strip_size_kb": 64, 00:16:08.815 "state": "configuring", 00:16:08.815 "raid_level": "concat", 00:16:08.815 "superblock": false, 00:16:08.815 "num_base_bdevs": 4, 00:16:08.815 "num_base_bdevs_discovered": 3, 00:16:08.815 "num_base_bdevs_operational": 4, 00:16:08.815 "base_bdevs_list": [ 00:16:08.815 { 00:16:08.815 "name": "BaseBdev1", 00:16:08.815 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:08.815 "is_configured": true, 00:16:08.815 "data_offset": 0, 00:16:08.815 "data_size": 65536 00:16:08.815 }, 00:16:08.815 { 00:16:08.815 "name": null, 00:16:08.815 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:08.815 "is_configured": false, 00:16:08.815 "data_offset": 0, 00:16:08.815 "data_size": 65536 00:16:08.815 }, 00:16:08.815 { 00:16:08.815 "name": "BaseBdev3", 00:16:08.815 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:08.815 "is_configured": true, 00:16:08.815 "data_offset": 0, 00:16:08.815 "data_size": 65536 00:16:08.815 }, 00:16:08.815 { 00:16:08.815 "name": "BaseBdev4", 00:16:08.815 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:08.815 "is_configured": true, 00:16:08.815 "data_offset": 0, 00:16:08.815 "data_size": 65536 00:16:08.815 } 00:16:08.815 ] 00:16:08.815 }' 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.815 10:23:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.380 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.380 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:09.639 [2024-07-15 10:23:34.388558] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.639 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.897 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.897 "name": "Existed_Raid", 00:16:09.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.897 "strip_size_kb": 64, 00:16:09.897 "state": "configuring", 00:16:09.897 "raid_level": "concat", 00:16:09.897 "superblock": false, 00:16:09.897 "num_base_bdevs": 4, 00:16:09.897 "num_base_bdevs_discovered": 2, 00:16:09.897 "num_base_bdevs_operational": 4, 00:16:09.897 "base_bdevs_list": [ 00:16:09.897 { 00:16:09.897 "name": "BaseBdev1", 00:16:09.897 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:09.897 "is_configured": true, 00:16:09.897 "data_offset": 0, 00:16:09.897 "data_size": 65536 00:16:09.897 }, 00:16:09.897 { 00:16:09.897 "name": null, 00:16:09.897 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:09.897 "is_configured": false, 00:16:09.897 "data_offset": 0, 00:16:09.897 "data_size": 65536 00:16:09.897 }, 00:16:09.898 { 00:16:09.898 "name": null, 00:16:09.898 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:09.898 "is_configured": false, 00:16:09.898 "data_offset": 0, 00:16:09.898 "data_size": 65536 00:16:09.898 }, 00:16:09.898 { 00:16:09.898 "name": "BaseBdev4", 00:16:09.898 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:09.898 "is_configured": true, 00:16:09.898 "data_offset": 0, 00:16:09.898 "data_size": 65536 00:16:09.898 } 00:16:09.898 ] 00:16:09.898 }' 00:16:09.898 10:23:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.898 10:23:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.465 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:10.465 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.465 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:10.465 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:10.724 [2024-07-15 10:23:35.375154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.724 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.983 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.983 "name": "Existed_Raid", 00:16:10.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.983 "strip_size_kb": 64, 00:16:10.983 "state": "configuring", 00:16:10.983 "raid_level": "concat", 00:16:10.983 "superblock": false, 00:16:10.983 "num_base_bdevs": 4, 00:16:10.983 "num_base_bdevs_discovered": 3, 00:16:10.983 "num_base_bdevs_operational": 4, 00:16:10.983 "base_bdevs_list": [ 00:16:10.983 { 00:16:10.983 "name": "BaseBdev1", 00:16:10.983 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:10.983 "is_configured": true, 00:16:10.983 "data_offset": 0, 00:16:10.983 "data_size": 65536 00:16:10.983 }, 00:16:10.983 { 00:16:10.983 "name": null, 00:16:10.983 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:10.983 "is_configured": false, 00:16:10.983 "data_offset": 0, 00:16:10.983 "data_size": 65536 00:16:10.983 }, 00:16:10.983 { 00:16:10.983 "name": "BaseBdev3", 00:16:10.983 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:10.983 "is_configured": true, 00:16:10.983 "data_offset": 0, 00:16:10.983 "data_size": 65536 00:16:10.983 }, 00:16:10.983 { 00:16:10.983 "name": "BaseBdev4", 00:16:10.983 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:10.983 "is_configured": true, 00:16:10.983 "data_offset": 0, 00:16:10.983 "data_size": 65536 00:16:10.983 } 00:16:10.983 ] 00:16:10.983 }' 00:16:10.983 10:23:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.983 10:23:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.551 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.551 10:23:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.551 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:11.551 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:11.810 [2024-07-15 10:23:36.357692] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.810 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.810 "name": "Existed_Raid", 00:16:11.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.810 "strip_size_kb": 64, 00:16:11.810 "state": "configuring", 00:16:11.810 "raid_level": "concat", 00:16:11.810 "superblock": false, 00:16:11.810 "num_base_bdevs": 4, 00:16:11.810 "num_base_bdevs_discovered": 2, 00:16:11.810 "num_base_bdevs_operational": 4, 00:16:11.810 "base_bdevs_list": [ 00:16:11.810 { 00:16:11.810 "name": null, 00:16:11.810 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:11.810 "is_configured": false, 00:16:11.810 "data_offset": 0, 00:16:11.810 "data_size": 65536 00:16:11.810 }, 00:16:11.810 { 00:16:11.810 "name": null, 00:16:11.811 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:11.811 "is_configured": false, 00:16:11.811 "data_offset": 0, 00:16:11.811 "data_size": 65536 00:16:11.811 }, 00:16:11.811 { 00:16:11.811 "name": "BaseBdev3", 00:16:11.811 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:11.811 "is_configured": true, 00:16:11.811 "data_offset": 0, 00:16:11.811 "data_size": 65536 00:16:11.811 }, 00:16:11.811 { 00:16:11.811 "name": "BaseBdev4", 00:16:11.811 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:11.811 "is_configured": true, 00:16:11.811 "data_offset": 0, 00:16:11.811 "data_size": 65536 00:16:11.811 } 00:16:11.811 ] 00:16:11.811 }' 00:16:11.811 10:23:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.811 10:23:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.378 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.378 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:12.637 [2024-07-15 10:23:37.325909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.637 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.895 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.896 "name": "Existed_Raid", 00:16:12.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.896 "strip_size_kb": 64, 00:16:12.896 "state": "configuring", 00:16:12.896 "raid_level": "concat", 00:16:12.896 "superblock": false, 00:16:12.896 "num_base_bdevs": 4, 00:16:12.896 "num_base_bdevs_discovered": 3, 00:16:12.896 "num_base_bdevs_operational": 4, 00:16:12.896 "base_bdevs_list": [ 00:16:12.896 { 00:16:12.896 "name": null, 00:16:12.896 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:12.896 "is_configured": false, 00:16:12.896 "data_offset": 0, 00:16:12.896 "data_size": 65536 00:16:12.896 }, 00:16:12.896 { 00:16:12.896 "name": "BaseBdev2", 00:16:12.896 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:12.896 "is_configured": true, 00:16:12.896 "data_offset": 0, 00:16:12.896 "data_size": 65536 00:16:12.896 }, 00:16:12.896 { 00:16:12.896 "name": "BaseBdev3", 00:16:12.896 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:12.896 "is_configured": true, 00:16:12.896 "data_offset": 0, 00:16:12.896 "data_size": 65536 00:16:12.896 }, 00:16:12.896 { 
00:16:12.896 "name": "BaseBdev4", 00:16:12.896 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:12.896 "is_configured": true, 00:16:12.896 "data_offset": 0, 00:16:12.896 "data_size": 65536 00:16:12.896 } 00:16:12.896 ] 00:16:12.896 }' 00:16:12.896 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.896 10:23:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.464 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.464 10:23:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:13.464 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:13.464 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.464 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:13.723 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0a6819c4-3f2c-4c39-9556-bde78689c573 00:16:13.723 [2024-07-15 10:23:38.487743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:13.723 [2024-07-15 10:23:38.487773] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x14c06f0 00:16:13.723 [2024-07-15 10:23:38.487778] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:13.723 [2024-07-15 10:23:38.487920] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14cc3d0 00:16:13.723 [2024-07-15 10:23:38.488022] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14c06f0 00:16:13.723 [2024-07-15 10:23:38.488028] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x14c06f0 00:16:13.723 [2024-07-15 10:23:38.488149] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.723 NewBaseBdev 00:16:13.723 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:13.723 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:13.723 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.723 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:13.723 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.723 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.723 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.025 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:14.284 [ 00:16:14.284 { 00:16:14.284 "name": "NewBaseBdev", 
00:16:14.284 "aliases": [ 00:16:14.284 "0a6819c4-3f2c-4c39-9556-bde78689c573" 00:16:14.284 ], 00:16:14.284 "product_name": "Malloc disk", 00:16:14.284 "block_size": 512, 00:16:14.284 "num_blocks": 65536, 00:16:14.284 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:14.284 "assigned_rate_limits": { 00:16:14.284 "rw_ios_per_sec": 0, 00:16:14.284 "rw_mbytes_per_sec": 0, 00:16:14.284 "r_mbytes_per_sec": 0, 00:16:14.284 "w_mbytes_per_sec": 0 00:16:14.284 }, 00:16:14.284 "claimed": true, 00:16:14.284 "claim_type": "exclusive_write", 00:16:14.284 "zoned": false, 00:16:14.284 "supported_io_types": { 00:16:14.284 "read": true, 00:16:14.284 "write": true, 00:16:14.284 "unmap": true, 00:16:14.284 "flush": true, 00:16:14.284 "reset": true, 00:16:14.284 "nvme_admin": false, 00:16:14.284 "nvme_io": false, 00:16:14.284 "nvme_io_md": false, 00:16:14.284 "write_zeroes": true, 00:16:14.284 "zcopy": true, 00:16:14.284 "get_zone_info": false, 00:16:14.284 "zone_management": false, 00:16:14.284 "zone_append": false, 00:16:14.284 "compare": false, 00:16:14.284 "compare_and_write": false, 00:16:14.284 "abort": true, 00:16:14.284 "seek_hole": false, 00:16:14.284 "seek_data": false, 00:16:14.284 "copy": true, 00:16:14.284 "nvme_iov_md": false 00:16:14.284 }, 00:16:14.284 "memory_domains": [ 00:16:14.284 { 00:16:14.285 "dma_device_id": "system", 00:16:14.285 "dma_device_type": 1 00:16:14.285 }, 00:16:14.285 { 00:16:14.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.285 "dma_device_type": 2 00:16:14.285 } 00:16:14.285 ], 00:16:14.285 "driver_specific": {} 00:16:14.285 } 00:16:14.285 ] 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.285 "name": "Existed_Raid", 00:16:14.285 "uuid": "8f63a6ab-d389-4e51-a52d-856046338f93", 00:16:14.285 "strip_size_kb": 64, 00:16:14.285 "state": "online", 00:16:14.285 "raid_level": "concat", 00:16:14.285 "superblock": false, 00:16:14.285 
"num_base_bdevs": 4, 00:16:14.285 "num_base_bdevs_discovered": 4, 00:16:14.285 "num_base_bdevs_operational": 4, 00:16:14.285 "base_bdevs_list": [ 00:16:14.285 { 00:16:14.285 "name": "NewBaseBdev", 00:16:14.285 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:14.285 "is_configured": true, 00:16:14.285 "data_offset": 0, 00:16:14.285 "data_size": 65536 00:16:14.285 }, 00:16:14.285 { 00:16:14.285 "name": "BaseBdev2", 00:16:14.285 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:14.285 "is_configured": true, 00:16:14.285 "data_offset": 0, 00:16:14.285 "data_size": 65536 00:16:14.285 }, 00:16:14.285 { 00:16:14.285 "name": "BaseBdev3", 00:16:14.285 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:14.285 "is_configured": true, 00:16:14.285 "data_offset": 0, 00:16:14.285 "data_size": 65536 00:16:14.285 }, 00:16:14.285 { 00:16:14.285 "name": "BaseBdev4", 00:16:14.285 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:14.285 "is_configured": true, 00:16:14.285 "data_offset": 0, 00:16:14.285 "data_size": 65536 00:16:14.285 } 00:16:14.285 ] 00:16:14.285 }' 00:16:14.285 10:23:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.285 10:23:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.853 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.853 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:14.853 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:14.853 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:14.853 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:14.853 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:14.853 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:14.853 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:15.112 [2024-07-15 10:23:39.654993] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.112 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:15.112 "name": "Existed_Raid", 00:16:15.112 "aliases": [ 00:16:15.112 "8f63a6ab-d389-4e51-a52d-856046338f93" 00:16:15.112 ], 00:16:15.112 "product_name": "Raid Volume", 00:16:15.112 "block_size": 512, 00:16:15.112 "num_blocks": 262144, 00:16:15.112 "uuid": "8f63a6ab-d389-4e51-a52d-856046338f93", 00:16:15.112 "assigned_rate_limits": { 00:16:15.112 "rw_ios_per_sec": 0, 00:16:15.112 "rw_mbytes_per_sec": 0, 00:16:15.112 "r_mbytes_per_sec": 0, 00:16:15.112 "w_mbytes_per_sec": 0 00:16:15.112 }, 00:16:15.112 "claimed": false, 00:16:15.112 "zoned": false, 00:16:15.112 "supported_io_types": { 00:16:15.112 "read": true, 00:16:15.112 "write": true, 00:16:15.112 "unmap": true, 00:16:15.112 "flush": true, 00:16:15.112 "reset": true, 00:16:15.112 "nvme_admin": false, 00:16:15.112 "nvme_io": false, 00:16:15.112 "nvme_io_md": false, 00:16:15.112 "write_zeroes": true, 00:16:15.112 "zcopy": false, 00:16:15.112 "get_zone_info": false, 00:16:15.112 "zone_management": false, 00:16:15.112 "zone_append": false, 00:16:15.112 "compare": false, 00:16:15.112 
"compare_and_write": false, 00:16:15.112 "abort": false, 00:16:15.112 "seek_hole": false, 00:16:15.112 "seek_data": false, 00:16:15.112 "copy": false, 00:16:15.112 "nvme_iov_md": false 00:16:15.112 }, 00:16:15.112 "memory_domains": [ 00:16:15.113 { 00:16:15.113 "dma_device_id": "system", 00:16:15.113 "dma_device_type": 1 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.113 "dma_device_type": 2 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "dma_device_id": "system", 00:16:15.113 "dma_device_type": 1 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.113 "dma_device_type": 2 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "dma_device_id": "system", 00:16:15.113 "dma_device_type": 1 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.113 "dma_device_type": 2 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "dma_device_id": "system", 00:16:15.113 "dma_device_type": 1 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.113 "dma_device_type": 2 00:16:15.113 } 00:16:15.113 ], 00:16:15.113 "driver_specific": { 00:16:15.113 "raid": { 00:16:15.113 "uuid": "8f63a6ab-d389-4e51-a52d-856046338f93", 00:16:15.113 "strip_size_kb": 64, 00:16:15.113 "state": "online", 00:16:15.113 "raid_level": "concat", 00:16:15.113 "superblock": false, 00:16:15.113 "num_base_bdevs": 4, 00:16:15.113 "num_base_bdevs_discovered": 4, 00:16:15.113 "num_base_bdevs_operational": 4, 00:16:15.113 "base_bdevs_list": [ 00:16:15.113 { 00:16:15.113 "name": "NewBaseBdev", 00:16:15.113 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:15.113 "is_configured": true, 00:16:15.113 "data_offset": 0, 00:16:15.113 "data_size": 65536 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "name": "BaseBdev2", 00:16:15.113 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:15.113 "is_configured": true, 00:16:15.113 "data_offset": 0, 00:16:15.113 "data_size": 65536 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "name": "BaseBdev3", 00:16:15.113 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:15.113 "is_configured": true, 00:16:15.113 "data_offset": 0, 00:16:15.113 "data_size": 65536 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "name": "BaseBdev4", 00:16:15.113 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:15.113 "is_configured": true, 00:16:15.113 "data_offset": 0, 00:16:15.113 "data_size": 65536 00:16:15.113 } 00:16:15.113 ] 00:16:15.113 } 00:16:15.113 } 00:16:15.113 }' 00:16:15.113 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:15.113 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:15.113 BaseBdev2 00:16:15.113 BaseBdev3 00:16:15.113 BaseBdev4' 00:16:15.113 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.113 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:15.113 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.113 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.113 "name": "NewBaseBdev", 00:16:15.113 "aliases": [ 00:16:15.113 "0a6819c4-3f2c-4c39-9556-bde78689c573" 00:16:15.113 ], 00:16:15.113 
"product_name": "Malloc disk", 00:16:15.113 "block_size": 512, 00:16:15.113 "num_blocks": 65536, 00:16:15.113 "uuid": "0a6819c4-3f2c-4c39-9556-bde78689c573", 00:16:15.113 "assigned_rate_limits": { 00:16:15.113 "rw_ios_per_sec": 0, 00:16:15.113 "rw_mbytes_per_sec": 0, 00:16:15.113 "r_mbytes_per_sec": 0, 00:16:15.113 "w_mbytes_per_sec": 0 00:16:15.113 }, 00:16:15.113 "claimed": true, 00:16:15.113 "claim_type": "exclusive_write", 00:16:15.113 "zoned": false, 00:16:15.113 "supported_io_types": { 00:16:15.113 "read": true, 00:16:15.113 "write": true, 00:16:15.113 "unmap": true, 00:16:15.113 "flush": true, 00:16:15.113 "reset": true, 00:16:15.113 "nvme_admin": false, 00:16:15.113 "nvme_io": false, 00:16:15.113 "nvme_io_md": false, 00:16:15.113 "write_zeroes": true, 00:16:15.113 "zcopy": true, 00:16:15.113 "get_zone_info": false, 00:16:15.113 "zone_management": false, 00:16:15.113 "zone_append": false, 00:16:15.113 "compare": false, 00:16:15.113 "compare_and_write": false, 00:16:15.113 "abort": true, 00:16:15.113 "seek_hole": false, 00:16:15.113 "seek_data": false, 00:16:15.113 "copy": true, 00:16:15.113 "nvme_iov_md": false 00:16:15.113 }, 00:16:15.113 "memory_domains": [ 00:16:15.113 { 00:16:15.113 "dma_device_id": "system", 00:16:15.113 "dma_device_type": 1 00:16:15.113 }, 00:16:15.113 { 00:16:15.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.113 "dma_device_type": 2 00:16:15.113 } 00:16:15.113 ], 00:16:15.113 "driver_specific": {} 00:16:15.113 }' 00:16:15.113 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.372 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.372 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.372 10:23:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.372 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.372 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.372 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.372 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.372 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.372 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.631 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.631 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:15.631 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:15.631 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:15.631 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:15.631 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:15.631 "name": "BaseBdev2", 00:16:15.631 "aliases": [ 00:16:15.631 "1c24f4aa-d045-4575-8113-a71a4ebea2ea" 00:16:15.631 ], 00:16:15.631 "product_name": "Malloc disk", 00:16:15.631 "block_size": 512, 00:16:15.631 "num_blocks": 65536, 00:16:15.631 "uuid": "1c24f4aa-d045-4575-8113-a71a4ebea2ea", 00:16:15.631 
"assigned_rate_limits": { 00:16:15.631 "rw_ios_per_sec": 0, 00:16:15.631 "rw_mbytes_per_sec": 0, 00:16:15.631 "r_mbytes_per_sec": 0, 00:16:15.631 "w_mbytes_per_sec": 0 00:16:15.631 }, 00:16:15.631 "claimed": true, 00:16:15.631 "claim_type": "exclusive_write", 00:16:15.632 "zoned": false, 00:16:15.632 "supported_io_types": { 00:16:15.632 "read": true, 00:16:15.632 "write": true, 00:16:15.632 "unmap": true, 00:16:15.632 "flush": true, 00:16:15.632 "reset": true, 00:16:15.632 "nvme_admin": false, 00:16:15.632 "nvme_io": false, 00:16:15.632 "nvme_io_md": false, 00:16:15.632 "write_zeroes": true, 00:16:15.632 "zcopy": true, 00:16:15.632 "get_zone_info": false, 00:16:15.632 "zone_management": false, 00:16:15.632 "zone_append": false, 00:16:15.632 "compare": false, 00:16:15.632 "compare_and_write": false, 00:16:15.632 "abort": true, 00:16:15.632 "seek_hole": false, 00:16:15.632 "seek_data": false, 00:16:15.632 "copy": true, 00:16:15.632 "nvme_iov_md": false 00:16:15.632 }, 00:16:15.632 "memory_domains": [ 00:16:15.632 { 00:16:15.632 "dma_device_id": "system", 00:16:15.632 "dma_device_type": 1 00:16:15.632 }, 00:16:15.632 { 00:16:15.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.632 "dma_device_type": 2 00:16:15.632 } 00:16:15.632 ], 00:16:15.632 "driver_specific": {} 00:16:15.632 }' 00:16:15.632 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:15.891 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.150 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.150 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:16.150 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:16.150 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:16.150 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:16.150 "name": "BaseBdev3", 00:16:16.150 "aliases": [ 00:16:16.150 "2a807f32-1add-4ec6-8135-d1460c256868" 00:16:16.150 ], 00:16:16.150 "product_name": "Malloc disk", 00:16:16.150 "block_size": 512, 00:16:16.150 "num_blocks": 65536, 00:16:16.150 "uuid": "2a807f32-1add-4ec6-8135-d1460c256868", 00:16:16.150 "assigned_rate_limits": { 00:16:16.150 "rw_ios_per_sec": 0, 00:16:16.150 "rw_mbytes_per_sec": 0, 00:16:16.150 "r_mbytes_per_sec": 0, 00:16:16.150 "w_mbytes_per_sec": 0 00:16:16.150 
}, 00:16:16.150 "claimed": true, 00:16:16.150 "claim_type": "exclusive_write", 00:16:16.150 "zoned": false, 00:16:16.150 "supported_io_types": { 00:16:16.150 "read": true, 00:16:16.150 "write": true, 00:16:16.150 "unmap": true, 00:16:16.150 "flush": true, 00:16:16.150 "reset": true, 00:16:16.150 "nvme_admin": false, 00:16:16.150 "nvme_io": false, 00:16:16.150 "nvme_io_md": false, 00:16:16.150 "write_zeroes": true, 00:16:16.150 "zcopy": true, 00:16:16.150 "get_zone_info": false, 00:16:16.150 "zone_management": false, 00:16:16.150 "zone_append": false, 00:16:16.150 "compare": false, 00:16:16.150 "compare_and_write": false, 00:16:16.150 "abort": true, 00:16:16.150 "seek_hole": false, 00:16:16.150 "seek_data": false, 00:16:16.150 "copy": true, 00:16:16.150 "nvme_iov_md": false 00:16:16.150 }, 00:16:16.150 "memory_domains": [ 00:16:16.150 { 00:16:16.150 "dma_device_id": "system", 00:16:16.150 "dma_device_type": 1 00:16:16.150 }, 00:16:16.150 { 00:16:16.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.150 "dma_device_type": 2 00:16:16.150 } 00:16:16.150 ], 00:16:16.150 "driver_specific": {} 00:16:16.150 }' 00:16:16.150 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.151 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.151 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:16.410 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.410 10:23:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:16.410 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:16.669 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:16.669 "name": "BaseBdev4", 00:16:16.669 "aliases": [ 00:16:16.669 "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d" 00:16:16.669 ], 00:16:16.669 "product_name": "Malloc disk", 00:16:16.669 "block_size": 512, 00:16:16.669 "num_blocks": 65536, 00:16:16.669 "uuid": "bfaf7b86-763c-4e67-b4ff-98b3d9e3833d", 00:16:16.669 "assigned_rate_limits": { 00:16:16.669 "rw_ios_per_sec": 0, 00:16:16.669 "rw_mbytes_per_sec": 0, 00:16:16.669 "r_mbytes_per_sec": 0, 00:16:16.669 "w_mbytes_per_sec": 0 00:16:16.669 }, 00:16:16.669 "claimed": true, 00:16:16.669 "claim_type": "exclusive_write", 00:16:16.669 "zoned": false, 00:16:16.669 "supported_io_types": { 00:16:16.669 "read": true, 
00:16:16.669 "write": true, 00:16:16.669 "unmap": true, 00:16:16.669 "flush": true, 00:16:16.669 "reset": true, 00:16:16.669 "nvme_admin": false, 00:16:16.669 "nvme_io": false, 00:16:16.669 "nvme_io_md": false, 00:16:16.669 "write_zeroes": true, 00:16:16.669 "zcopy": true, 00:16:16.669 "get_zone_info": false, 00:16:16.669 "zone_management": false, 00:16:16.669 "zone_append": false, 00:16:16.669 "compare": false, 00:16:16.669 "compare_and_write": false, 00:16:16.669 "abort": true, 00:16:16.669 "seek_hole": false, 00:16:16.669 "seek_data": false, 00:16:16.669 "copy": true, 00:16:16.669 "nvme_iov_md": false 00:16:16.669 }, 00:16:16.669 "memory_domains": [ 00:16:16.669 { 00:16:16.669 "dma_device_id": "system", 00:16:16.669 "dma_device_type": 1 00:16:16.669 }, 00:16:16.669 { 00:16:16.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.669 "dma_device_type": 2 00:16:16.669 } 00:16:16.670 ], 00:16:16.670 "driver_specific": {} 00:16:16.670 }' 00:16:16.670 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.670 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:16.670 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:16.670 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.670 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:16.929 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:16.929 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.929 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:16.929 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:16.929 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.929 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:16.929 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:16.929 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:17.188 [2024-07-15 10:23:41.768252] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.188 [2024-07-15 10:23:41.768275] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.188 [2024-07-15 10:23:41.768313] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.188 [2024-07-15 10:23:41.768353] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.188 [2024-07-15 10:23:41.768360] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14c06f0 name Existed_Raid, state offline 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1818087 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1818087 ']' 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1818087 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:17.188 10:23:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1818087 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1818087' 00:16:17.188 killing process with pid 1818087 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1818087 00:16:17.188 [2024-07-15 10:23:41.833543] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:17.188 10:23:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1818087 00:16:17.188 [2024-07-15 10:23:41.863831] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:17.447 00:16:17.447 real 0m24.425s 00:16:17.447 user 0m44.621s 00:16:17.447 sys 0m4.768s 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.447 ************************************ 00:16:17.447 END TEST raid_state_function_test 00:16:17.447 ************************************ 00:16:17.447 10:23:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:17.447 10:23:42 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:17.447 10:23:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:17.447 10:23:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.447 10:23:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.447 ************************************ 00:16:17.447 START TEST raid_state_function_test_sb 00:16:17.447 ************************************ 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
(( i++ )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1822993 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1822993' 00:16:17.447 Process raid pid: 1822993 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1822993 /var/tmp/spdk-raid.sock 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1822993 ']' 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:17.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.447 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.447 [2024-07-15 10:23:42.172460] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:17.447 [2024-07-15 10:23:42.172502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 
0000:3f:01.4 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:17.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.447 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:17.706 [2024-07-15 10:23:42.264737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.706 [2024-07-15 10:23:42.340796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.706 [2024-07-15 10:23:42.393702] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.706 [2024-07-15 10:23:42.393728] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.274 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.274 10:23:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:18.274 10:23:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:18.533 [2024-07-15 10:23:43.124138] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.533 [2024-07-15 10:23:43.124167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.533 [2024-07-15 10:23:43.124174] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.533 [2024-07-15 10:23:43.124182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.533 [2024-07-15 10:23:43.124187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.533 [2024-07-15 10:23:43.124194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.533 [2024-07-15 10:23:43.124199] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:18.533 [2024-07-15 10:23:43.124206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.533 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.533 "name": "Existed_Raid", 00:16:18.533 "uuid": "7d15c88e-a82d-4ae9-931a-9258699ebf5c", 00:16:18.533 "strip_size_kb": 64, 00:16:18.533 "state": "configuring", 00:16:18.533 "raid_level": "concat", 00:16:18.533 "superblock": true, 00:16:18.533 "num_base_bdevs": 4, 00:16:18.533 "num_base_bdevs_discovered": 0, 00:16:18.533 "num_base_bdevs_operational": 4, 00:16:18.533 "base_bdevs_list": [ 00:16:18.533 { 00:16:18.533 "name": "BaseBdev1", 00:16:18.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.533 "is_configured": false, 00:16:18.533 "data_offset": 0, 00:16:18.533 "data_size": 0 00:16:18.533 }, 00:16:18.533 { 00:16:18.533 "name": "BaseBdev2", 00:16:18.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.533 "is_configured": false, 00:16:18.533 "data_offset": 0, 00:16:18.533 "data_size": 0 00:16:18.533 }, 00:16:18.533 { 00:16:18.533 "name": "BaseBdev3", 00:16:18.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.533 "is_configured": false, 00:16:18.533 "data_offset": 0, 00:16:18.533 "data_size": 0 00:16:18.533 }, 00:16:18.533 { 00:16:18.534 "name": "BaseBdev4", 00:16:18.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.534 "is_configured": false, 00:16:18.534 "data_offset": 0, 00:16:18.534 "data_size": 0 00:16:18.534 } 00:16:18.534 ] 00:16:18.534 }' 00:16:18.534 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.534 10:23:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.101 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:19.360 [2024-07-15 10:23:43.918087] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.360 [2024-07-15 10:23:43.918108] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8dff60 name Existed_Raid, state configuring 00:16:19.360 10:23:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:19.360 [2024-07-15 10:23:44.090553] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.360 [2024-07-15 10:23:44.090572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.360 [2024-07-15 10:23:44.090578] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.360 [2024-07-15 10:23:44.090585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.360 [2024-07-15 10:23:44.090590] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:19.360 [2024-07-15 10:23:44.090597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:19.360 [2024-07-15 10:23:44.090602] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:19.360 [2024-07-15 10:23:44.090609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:19.360 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:19.618 [2024-07-15 10:23:44.263337] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.618 BaseBdev1 00:16:19.618 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:19.618 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:19.618 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:19.618 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:19.618 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:19.618 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:19.618 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.876 [ 00:16:19.876 { 00:16:19.876 "name": "BaseBdev1", 00:16:19.876 "aliases": [ 00:16:19.876 "706512fc-6c3e-49a0-a643-22e8c11e6b55" 00:16:19.876 ], 00:16:19.876 "product_name": "Malloc disk", 00:16:19.876 "block_size": 512, 00:16:19.876 "num_blocks": 65536, 00:16:19.876 "uuid": "706512fc-6c3e-49a0-a643-22e8c11e6b55", 00:16:19.876 "assigned_rate_limits": { 00:16:19.876 "rw_ios_per_sec": 0, 00:16:19.876 "rw_mbytes_per_sec": 0, 00:16:19.876 "r_mbytes_per_sec": 0, 00:16:19.876 "w_mbytes_per_sec": 0 00:16:19.876 }, 00:16:19.876 "claimed": true, 00:16:19.876 "claim_type": "exclusive_write", 00:16:19.876 "zoned": false, 00:16:19.876 
"supported_io_types": { 00:16:19.876 "read": true, 00:16:19.876 "write": true, 00:16:19.876 "unmap": true, 00:16:19.876 "flush": true, 00:16:19.876 "reset": true, 00:16:19.876 "nvme_admin": false, 00:16:19.876 "nvme_io": false, 00:16:19.876 "nvme_io_md": false, 00:16:19.876 "write_zeroes": true, 00:16:19.876 "zcopy": true, 00:16:19.876 "get_zone_info": false, 00:16:19.876 "zone_management": false, 00:16:19.876 "zone_append": false, 00:16:19.876 "compare": false, 00:16:19.876 "compare_and_write": false, 00:16:19.876 "abort": true, 00:16:19.876 "seek_hole": false, 00:16:19.876 "seek_data": false, 00:16:19.876 "copy": true, 00:16:19.876 "nvme_iov_md": false 00:16:19.876 }, 00:16:19.876 "memory_domains": [ 00:16:19.876 { 00:16:19.876 "dma_device_id": "system", 00:16:19.876 "dma_device_type": 1 00:16:19.876 }, 00:16:19.876 { 00:16:19.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.876 "dma_device_type": 2 00:16:19.876 } 00:16:19.876 ], 00:16:19.876 "driver_specific": {} 00:16:19.876 } 00:16:19.876 ] 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.876 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.877 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.135 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.135 "name": "Existed_Raid", 00:16:20.135 "uuid": "0f821aa0-7907-4ee8-96e8-de5ef2439ac6", 00:16:20.135 "strip_size_kb": 64, 00:16:20.135 "state": "configuring", 00:16:20.135 "raid_level": "concat", 00:16:20.135 "superblock": true, 00:16:20.135 "num_base_bdevs": 4, 00:16:20.135 "num_base_bdevs_discovered": 1, 00:16:20.135 "num_base_bdevs_operational": 4, 00:16:20.135 "base_bdevs_list": [ 00:16:20.135 { 00:16:20.135 "name": "BaseBdev1", 00:16:20.135 "uuid": "706512fc-6c3e-49a0-a643-22e8c11e6b55", 00:16:20.135 "is_configured": true, 00:16:20.135 "data_offset": 2048, 00:16:20.135 "data_size": 63488 00:16:20.135 }, 00:16:20.135 { 00:16:20.135 "name": "BaseBdev2", 00:16:20.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.135 "is_configured": false, 00:16:20.135 
"data_offset": 0, 00:16:20.135 "data_size": 0 00:16:20.135 }, 00:16:20.135 { 00:16:20.135 "name": "BaseBdev3", 00:16:20.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.135 "is_configured": false, 00:16:20.135 "data_offset": 0, 00:16:20.135 "data_size": 0 00:16:20.135 }, 00:16:20.135 { 00:16:20.135 "name": "BaseBdev4", 00:16:20.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.135 "is_configured": false, 00:16:20.135 "data_offset": 0, 00:16:20.135 "data_size": 0 00:16:20.135 } 00:16:20.135 ] 00:16:20.135 }' 00:16:20.135 10:23:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.135 10:23:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.701 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:20.701 [2024-07-15 10:23:45.446378] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.701 [2024-07-15 10:23:45.446407] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8df7d0 name Existed_Raid, state configuring 00:16:20.701 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:20.958 [2024-07-15 10:23:45.610844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.958 [2024-07-15 10:23:45.611896] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.958 [2024-07-15 10:23:45.611930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.958 [2024-07-15 10:23:45.611937] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.958 [2024-07-15 10:23:45.611944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.958 [2024-07-15 10:23:45.611949] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:20.958 [2024-07-15 10:23:45.611956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.958 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.215 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.215 "name": "Existed_Raid", 00:16:21.215 "uuid": "cee09de8-1210-4a41-ab3e-88f8e6af64d4", 00:16:21.215 "strip_size_kb": 64, 00:16:21.215 "state": "configuring", 00:16:21.215 "raid_level": "concat", 00:16:21.215 "superblock": true, 00:16:21.215 "num_base_bdevs": 4, 00:16:21.215 "num_base_bdevs_discovered": 1, 00:16:21.215 "num_base_bdevs_operational": 4, 00:16:21.215 "base_bdevs_list": [ 00:16:21.215 { 00:16:21.215 "name": "BaseBdev1", 00:16:21.215 "uuid": "706512fc-6c3e-49a0-a643-22e8c11e6b55", 00:16:21.215 "is_configured": true, 00:16:21.215 "data_offset": 2048, 00:16:21.215 "data_size": 63488 00:16:21.215 }, 00:16:21.215 { 00:16:21.215 "name": "BaseBdev2", 00:16:21.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.215 "is_configured": false, 00:16:21.215 "data_offset": 0, 00:16:21.215 "data_size": 0 00:16:21.215 }, 00:16:21.215 { 00:16:21.215 "name": "BaseBdev3", 00:16:21.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.215 "is_configured": false, 00:16:21.215 "data_offset": 0, 00:16:21.215 "data_size": 0 00:16:21.215 }, 00:16:21.215 { 00:16:21.215 "name": "BaseBdev4", 00:16:21.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.215 "is_configured": false, 00:16:21.215 "data_offset": 0, 00:16:21.215 "data_size": 0 00:16:21.215 } 00:16:21.215 ] 00:16:21.215 }' 00:16:21.215 10:23:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.215 10:23:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.781 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:21.781 [2024-07-15 10:23:46.435715] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.781 BaseBdev2 00:16:21.781 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:21.781 10:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:21.781 10:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:21.781 10:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:21.781 10:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:21.781 10:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:21.781 10:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.037 10:23:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:22.037 [ 00:16:22.037 { 00:16:22.037 "name": "BaseBdev2", 00:16:22.037 "aliases": [ 00:16:22.037 "5414653b-4a35-4f1c-b79a-121d0a0546a9" 00:16:22.037 ], 00:16:22.037 "product_name": "Malloc disk", 00:16:22.037 "block_size": 512, 00:16:22.037 "num_blocks": 65536, 00:16:22.037 "uuid": "5414653b-4a35-4f1c-b79a-121d0a0546a9", 00:16:22.037 "assigned_rate_limits": { 00:16:22.037 "rw_ios_per_sec": 0, 00:16:22.037 "rw_mbytes_per_sec": 0, 00:16:22.037 "r_mbytes_per_sec": 0, 00:16:22.037 "w_mbytes_per_sec": 0 00:16:22.037 }, 00:16:22.037 "claimed": true, 00:16:22.037 "claim_type": "exclusive_write", 00:16:22.037 "zoned": false, 00:16:22.037 "supported_io_types": { 00:16:22.037 "read": true, 00:16:22.037 "write": true, 00:16:22.037 "unmap": true, 00:16:22.038 "flush": true, 00:16:22.038 "reset": true, 00:16:22.038 "nvme_admin": false, 00:16:22.038 "nvme_io": false, 00:16:22.038 "nvme_io_md": false, 00:16:22.038 "write_zeroes": true, 00:16:22.038 "zcopy": true, 00:16:22.038 "get_zone_info": false, 00:16:22.038 "zone_management": false, 00:16:22.038 "zone_append": false, 00:16:22.038 "compare": false, 00:16:22.038 "compare_and_write": false, 00:16:22.038 "abort": true, 00:16:22.038 "seek_hole": false, 00:16:22.038 "seek_data": false, 00:16:22.038 "copy": true, 00:16:22.038 "nvme_iov_md": false 00:16:22.038 }, 00:16:22.038 "memory_domains": [ 00:16:22.038 { 00:16:22.038 "dma_device_id": "system", 00:16:22.038 "dma_device_type": 1 00:16:22.038 }, 00:16:22.038 { 00:16:22.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.038 "dma_device_type": 2 00:16:22.038 } 00:16:22.038 ], 00:16:22.038 "driver_specific": {} 00:16:22.038 } 00:16:22.038 ] 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.038 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.038 10:23:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.296 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.296 "name": "Existed_Raid", 00:16:22.296 "uuid": "cee09de8-1210-4a41-ab3e-88f8e6af64d4", 00:16:22.296 "strip_size_kb": 64, 00:16:22.296 "state": "configuring", 00:16:22.296 "raid_level": "concat", 00:16:22.296 "superblock": true, 00:16:22.296 "num_base_bdevs": 4, 00:16:22.296 "num_base_bdevs_discovered": 2, 00:16:22.296 "num_base_bdevs_operational": 4, 00:16:22.296 "base_bdevs_list": [ 00:16:22.296 { 00:16:22.296 "name": "BaseBdev1", 00:16:22.296 "uuid": "706512fc-6c3e-49a0-a643-22e8c11e6b55", 00:16:22.296 "is_configured": true, 00:16:22.296 "data_offset": 2048, 00:16:22.296 "data_size": 63488 00:16:22.296 }, 00:16:22.296 { 00:16:22.296 "name": "BaseBdev2", 00:16:22.296 "uuid": "5414653b-4a35-4f1c-b79a-121d0a0546a9", 00:16:22.296 "is_configured": true, 00:16:22.296 "data_offset": 2048, 00:16:22.296 "data_size": 63488 00:16:22.296 }, 00:16:22.296 { 00:16:22.296 "name": "BaseBdev3", 00:16:22.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.296 "is_configured": false, 00:16:22.296 "data_offset": 0, 00:16:22.296 "data_size": 0 00:16:22.296 }, 00:16:22.296 { 00:16:22.296 "name": "BaseBdev4", 00:16:22.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.296 "is_configured": false, 00:16:22.296 "data_offset": 0, 00:16:22.296 "data_size": 0 00:16:22.296 } 00:16:22.296 ] 00:16:22.296 }' 00:16:22.296 10:23:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.296 10:23:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.861 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:22.861 [2024-07-15 10:23:47.589400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.861 BaseBdev3 00:16:22.861 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:22.861 10:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:22.861 10:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:22.861 10:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:22.861 10:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:22.861 10:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:22.861 10:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.118 10:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:23.376 [ 00:16:23.376 { 00:16:23.376 "name": "BaseBdev3", 00:16:23.376 "aliases": [ 00:16:23.376 "8ac180ea-9cfb-433c-ac0f-e10d2a6c1178" 00:16:23.376 ], 00:16:23.376 "product_name": "Malloc disk", 00:16:23.376 "block_size": 512, 
00:16:23.376 "num_blocks": 65536, 00:16:23.376 "uuid": "8ac180ea-9cfb-433c-ac0f-e10d2a6c1178", 00:16:23.376 "assigned_rate_limits": { 00:16:23.376 "rw_ios_per_sec": 0, 00:16:23.376 "rw_mbytes_per_sec": 0, 00:16:23.376 "r_mbytes_per_sec": 0, 00:16:23.376 "w_mbytes_per_sec": 0 00:16:23.376 }, 00:16:23.376 "claimed": true, 00:16:23.376 "claim_type": "exclusive_write", 00:16:23.376 "zoned": false, 00:16:23.376 "supported_io_types": { 00:16:23.376 "read": true, 00:16:23.376 "write": true, 00:16:23.376 "unmap": true, 00:16:23.376 "flush": true, 00:16:23.376 "reset": true, 00:16:23.376 "nvme_admin": false, 00:16:23.376 "nvme_io": false, 00:16:23.376 "nvme_io_md": false, 00:16:23.376 "write_zeroes": true, 00:16:23.376 "zcopy": true, 00:16:23.376 "get_zone_info": false, 00:16:23.376 "zone_management": false, 00:16:23.376 "zone_append": false, 00:16:23.376 "compare": false, 00:16:23.376 "compare_and_write": false, 00:16:23.376 "abort": true, 00:16:23.376 "seek_hole": false, 00:16:23.376 "seek_data": false, 00:16:23.376 "copy": true, 00:16:23.376 "nvme_iov_md": false 00:16:23.376 }, 00:16:23.376 "memory_domains": [ 00:16:23.376 { 00:16:23.376 "dma_device_id": "system", 00:16:23.376 "dma_device_type": 1 00:16:23.376 }, 00:16:23.376 { 00:16:23.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.376 "dma_device_type": 2 00:16:23.376 } 00:16:23.376 ], 00:16:23.376 "driver_specific": {} 00:16:23.376 } 00:16:23.376 ] 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.376 10:23:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.376 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.376 "name": "Existed_Raid", 00:16:23.376 "uuid": "cee09de8-1210-4a41-ab3e-88f8e6af64d4", 00:16:23.376 "strip_size_kb": 64, 00:16:23.376 "state": 
"configuring", 00:16:23.376 "raid_level": "concat", 00:16:23.376 "superblock": true, 00:16:23.376 "num_base_bdevs": 4, 00:16:23.376 "num_base_bdevs_discovered": 3, 00:16:23.376 "num_base_bdevs_operational": 4, 00:16:23.376 "base_bdevs_list": [ 00:16:23.376 { 00:16:23.376 "name": "BaseBdev1", 00:16:23.376 "uuid": "706512fc-6c3e-49a0-a643-22e8c11e6b55", 00:16:23.376 "is_configured": true, 00:16:23.376 "data_offset": 2048, 00:16:23.376 "data_size": 63488 00:16:23.376 }, 00:16:23.376 { 00:16:23.376 "name": "BaseBdev2", 00:16:23.376 "uuid": "5414653b-4a35-4f1c-b79a-121d0a0546a9", 00:16:23.376 "is_configured": true, 00:16:23.376 "data_offset": 2048, 00:16:23.376 "data_size": 63488 00:16:23.376 }, 00:16:23.376 { 00:16:23.376 "name": "BaseBdev3", 00:16:23.376 "uuid": "8ac180ea-9cfb-433c-ac0f-e10d2a6c1178", 00:16:23.376 "is_configured": true, 00:16:23.376 "data_offset": 2048, 00:16:23.376 "data_size": 63488 00:16:23.376 }, 00:16:23.376 { 00:16:23.376 "name": "BaseBdev4", 00:16:23.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.376 "is_configured": false, 00:16:23.376 "data_offset": 0, 00:16:23.376 "data_size": 0 00:16:23.376 } 00:16:23.376 ] 00:16:23.376 }' 00:16:23.376 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.376 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.941 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:24.199 [2024-07-15 10:23:48.731031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:24.199 [2024-07-15 10:23:48.731151] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8e0830 00:16:24.199 [2024-07-15 10:23:48.731161] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:24.199 [2024-07-15 10:23:48.731286] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8d71e0 00:16:24.199 [2024-07-15 10:23:48.731370] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8e0830 00:16:24.199 [2024-07-15 10:23:48.731377] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x8e0830 00:16:24.199 [2024-07-15 10:23:48.731440] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.199 BaseBdev4 00:16:24.199 10:23:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:24.199 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:24.199 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:24.199 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:24.199 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:24.199 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:24.199 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.199 10:23:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:24.457 [ 00:16:24.457 { 00:16:24.457 "name": "BaseBdev4", 00:16:24.457 "aliases": [ 00:16:24.457 "f2640261-d72b-46cb-9287-5970bb0afae8" 00:16:24.457 ], 00:16:24.457 "product_name": "Malloc disk", 00:16:24.457 "block_size": 512, 00:16:24.457 "num_blocks": 65536, 00:16:24.457 "uuid": "f2640261-d72b-46cb-9287-5970bb0afae8", 00:16:24.457 "assigned_rate_limits": { 00:16:24.457 "rw_ios_per_sec": 0, 00:16:24.457 "rw_mbytes_per_sec": 0, 00:16:24.457 "r_mbytes_per_sec": 0, 00:16:24.457 "w_mbytes_per_sec": 0 00:16:24.457 }, 00:16:24.457 "claimed": true, 00:16:24.457 "claim_type": "exclusive_write", 00:16:24.457 "zoned": false, 00:16:24.457 "supported_io_types": { 00:16:24.457 "read": true, 00:16:24.457 "write": true, 00:16:24.457 "unmap": true, 00:16:24.457 "flush": true, 00:16:24.457 "reset": true, 00:16:24.457 "nvme_admin": false, 00:16:24.457 "nvme_io": false, 00:16:24.457 "nvme_io_md": false, 00:16:24.457 "write_zeroes": true, 00:16:24.457 "zcopy": true, 00:16:24.457 "get_zone_info": false, 00:16:24.457 "zone_management": false, 00:16:24.457 "zone_append": false, 00:16:24.457 "compare": false, 00:16:24.457 "compare_and_write": false, 00:16:24.457 "abort": true, 00:16:24.457 "seek_hole": false, 00:16:24.457 "seek_data": false, 00:16:24.457 "copy": true, 00:16:24.457 "nvme_iov_md": false 00:16:24.457 }, 00:16:24.457 "memory_domains": [ 00:16:24.457 { 00:16:24.457 "dma_device_id": "system", 00:16:24.457 "dma_device_type": 1 00:16:24.457 }, 00:16:24.457 { 00:16:24.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.457 "dma_device_type": 2 00:16:24.457 } 00:16:24.457 ], 00:16:24.457 "driver_specific": {} 00:16:24.457 } 00:16:24.457 ] 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.457 10:23:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.715 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.715 "name": "Existed_Raid", 00:16:24.715 "uuid": "cee09de8-1210-4a41-ab3e-88f8e6af64d4", 00:16:24.715 "strip_size_kb": 64, 00:16:24.715 "state": "online", 00:16:24.715 "raid_level": "concat", 00:16:24.715 "superblock": true, 00:16:24.715 "num_base_bdevs": 4, 00:16:24.715 "num_base_bdevs_discovered": 4, 00:16:24.715 "num_base_bdevs_operational": 4, 00:16:24.715 "base_bdevs_list": [ 00:16:24.715 { 00:16:24.715 "name": "BaseBdev1", 00:16:24.715 "uuid": "706512fc-6c3e-49a0-a643-22e8c11e6b55", 00:16:24.715 "is_configured": true, 00:16:24.715 "data_offset": 2048, 00:16:24.715 "data_size": 63488 00:16:24.715 }, 00:16:24.715 { 00:16:24.715 "name": "BaseBdev2", 00:16:24.715 "uuid": "5414653b-4a35-4f1c-b79a-121d0a0546a9", 00:16:24.715 "is_configured": true, 00:16:24.715 "data_offset": 2048, 00:16:24.715 "data_size": 63488 00:16:24.715 }, 00:16:24.715 { 00:16:24.715 "name": "BaseBdev3", 00:16:24.715 "uuid": "8ac180ea-9cfb-433c-ac0f-e10d2a6c1178", 00:16:24.715 "is_configured": true, 00:16:24.715 "data_offset": 2048, 00:16:24.715 "data_size": 63488 00:16:24.715 }, 00:16:24.715 { 00:16:24.715 "name": "BaseBdev4", 00:16:24.715 "uuid": "f2640261-d72b-46cb-9287-5970bb0afae8", 00:16:24.715 "is_configured": true, 00:16:24.715 "data_offset": 2048, 00:16:24.715 "data_size": 63488 00:16:24.715 } 00:16:24.715 ] 00:16:24.715 }' 00:16:24.715 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.715 10:23:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:25.280 [2024-07-15 10:23:49.938353] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:25.280 "name": "Existed_Raid", 00:16:25.280 "aliases": [ 00:16:25.280 "cee09de8-1210-4a41-ab3e-88f8e6af64d4" 00:16:25.280 ], 00:16:25.280 "product_name": "Raid Volume", 00:16:25.280 "block_size": 512, 00:16:25.280 "num_blocks": 253952, 00:16:25.280 "uuid": "cee09de8-1210-4a41-ab3e-88f8e6af64d4", 00:16:25.280 "assigned_rate_limits": { 00:16:25.280 "rw_ios_per_sec": 0, 00:16:25.280 "rw_mbytes_per_sec": 0, 00:16:25.280 "r_mbytes_per_sec": 0, 00:16:25.280 "w_mbytes_per_sec": 0 00:16:25.280 }, 00:16:25.280 "claimed": false, 00:16:25.280 "zoned": false, 00:16:25.280 "supported_io_types": { 
00:16:25.280 "read": true, 00:16:25.280 "write": true, 00:16:25.280 "unmap": true, 00:16:25.280 "flush": true, 00:16:25.280 "reset": true, 00:16:25.280 "nvme_admin": false, 00:16:25.280 "nvme_io": false, 00:16:25.280 "nvme_io_md": false, 00:16:25.280 "write_zeroes": true, 00:16:25.280 "zcopy": false, 00:16:25.280 "get_zone_info": false, 00:16:25.280 "zone_management": false, 00:16:25.280 "zone_append": false, 00:16:25.280 "compare": false, 00:16:25.280 "compare_and_write": false, 00:16:25.280 "abort": false, 00:16:25.280 "seek_hole": false, 00:16:25.280 "seek_data": false, 00:16:25.280 "copy": false, 00:16:25.280 "nvme_iov_md": false 00:16:25.280 }, 00:16:25.280 "memory_domains": [ 00:16:25.280 { 00:16:25.280 "dma_device_id": "system", 00:16:25.280 "dma_device_type": 1 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.280 "dma_device_type": 2 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "dma_device_id": "system", 00:16:25.280 "dma_device_type": 1 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.280 "dma_device_type": 2 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "dma_device_id": "system", 00:16:25.280 "dma_device_type": 1 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.280 "dma_device_type": 2 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "dma_device_id": "system", 00:16:25.280 "dma_device_type": 1 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.280 "dma_device_type": 2 00:16:25.280 } 00:16:25.280 ], 00:16:25.280 "driver_specific": { 00:16:25.280 "raid": { 00:16:25.280 "uuid": "cee09de8-1210-4a41-ab3e-88f8e6af64d4", 00:16:25.280 "strip_size_kb": 64, 00:16:25.280 "state": "online", 00:16:25.280 "raid_level": "concat", 00:16:25.280 "superblock": true, 00:16:25.280 "num_base_bdevs": 4, 00:16:25.280 "num_base_bdevs_discovered": 4, 00:16:25.280 "num_base_bdevs_operational": 4, 00:16:25.280 "base_bdevs_list": [ 00:16:25.280 { 00:16:25.280 "name": "BaseBdev1", 00:16:25.280 "uuid": "706512fc-6c3e-49a0-a643-22e8c11e6b55", 00:16:25.280 "is_configured": true, 00:16:25.280 "data_offset": 2048, 00:16:25.280 "data_size": 63488 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "name": "BaseBdev2", 00:16:25.280 "uuid": "5414653b-4a35-4f1c-b79a-121d0a0546a9", 00:16:25.280 "is_configured": true, 00:16:25.280 "data_offset": 2048, 00:16:25.280 "data_size": 63488 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "name": "BaseBdev3", 00:16:25.280 "uuid": "8ac180ea-9cfb-433c-ac0f-e10d2a6c1178", 00:16:25.280 "is_configured": true, 00:16:25.280 "data_offset": 2048, 00:16:25.280 "data_size": 63488 00:16:25.280 }, 00:16:25.280 { 00:16:25.280 "name": "BaseBdev4", 00:16:25.280 "uuid": "f2640261-d72b-46cb-9287-5970bb0afae8", 00:16:25.280 "is_configured": true, 00:16:25.280 "data_offset": 2048, 00:16:25.280 "data_size": 63488 00:16:25.280 } 00:16:25.280 ] 00:16:25.280 } 00:16:25.280 } 00:16:25.280 }' 00:16:25.280 10:23:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.280 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:25.280 BaseBdev2 00:16:25.280 BaseBdev3 00:16:25.280 BaseBdev4' 00:16:25.280 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.280 10:23:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:25.280 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.577 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.577 "name": "BaseBdev1", 00:16:25.577 "aliases": [ 00:16:25.577 "706512fc-6c3e-49a0-a643-22e8c11e6b55" 00:16:25.577 ], 00:16:25.577 "product_name": "Malloc disk", 00:16:25.577 "block_size": 512, 00:16:25.577 "num_blocks": 65536, 00:16:25.577 "uuid": "706512fc-6c3e-49a0-a643-22e8c11e6b55", 00:16:25.577 "assigned_rate_limits": { 00:16:25.577 "rw_ios_per_sec": 0, 00:16:25.577 "rw_mbytes_per_sec": 0, 00:16:25.577 "r_mbytes_per_sec": 0, 00:16:25.577 "w_mbytes_per_sec": 0 00:16:25.577 }, 00:16:25.577 "claimed": true, 00:16:25.577 "claim_type": "exclusive_write", 00:16:25.577 "zoned": false, 00:16:25.577 "supported_io_types": { 00:16:25.577 "read": true, 00:16:25.577 "write": true, 00:16:25.577 "unmap": true, 00:16:25.577 "flush": true, 00:16:25.577 "reset": true, 00:16:25.577 "nvme_admin": false, 00:16:25.577 "nvme_io": false, 00:16:25.577 "nvme_io_md": false, 00:16:25.577 "write_zeroes": true, 00:16:25.577 "zcopy": true, 00:16:25.577 "get_zone_info": false, 00:16:25.577 "zone_management": false, 00:16:25.577 "zone_append": false, 00:16:25.577 "compare": false, 00:16:25.577 "compare_and_write": false, 00:16:25.577 "abort": true, 00:16:25.577 "seek_hole": false, 00:16:25.577 "seek_data": false, 00:16:25.577 "copy": true, 00:16:25.577 "nvme_iov_md": false 00:16:25.577 }, 00:16:25.577 "memory_domains": [ 00:16:25.577 { 00:16:25.577 "dma_device_id": "system", 00:16:25.577 "dma_device_type": 1 00:16:25.577 }, 00:16:25.577 { 00:16:25.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.577 "dma_device_type": 2 00:16:25.577 } 00:16:25.577 ], 00:16:25.577 "driver_specific": {} 00:16:25.577 }' 00:16:25.577 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.577 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.577 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.577 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.577 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.577 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.577 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.835 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.835 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.836 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.836 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.836 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.836 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.836 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.836 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:26.093 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.093 "name": "BaseBdev2", 00:16:26.093 "aliases": [ 00:16:26.093 "5414653b-4a35-4f1c-b79a-121d0a0546a9" 00:16:26.093 ], 00:16:26.093 "product_name": "Malloc disk", 00:16:26.093 "block_size": 512, 00:16:26.093 "num_blocks": 65536, 00:16:26.093 "uuid": "5414653b-4a35-4f1c-b79a-121d0a0546a9", 00:16:26.093 "assigned_rate_limits": { 00:16:26.093 "rw_ios_per_sec": 0, 00:16:26.093 "rw_mbytes_per_sec": 0, 00:16:26.093 "r_mbytes_per_sec": 0, 00:16:26.093 "w_mbytes_per_sec": 0 00:16:26.093 }, 00:16:26.094 "claimed": true, 00:16:26.094 "claim_type": "exclusive_write", 00:16:26.094 "zoned": false, 00:16:26.094 "supported_io_types": { 00:16:26.094 "read": true, 00:16:26.094 "write": true, 00:16:26.094 "unmap": true, 00:16:26.094 "flush": true, 00:16:26.094 "reset": true, 00:16:26.094 "nvme_admin": false, 00:16:26.094 "nvme_io": false, 00:16:26.094 "nvme_io_md": false, 00:16:26.094 "write_zeroes": true, 00:16:26.094 "zcopy": true, 00:16:26.094 "get_zone_info": false, 00:16:26.094 "zone_management": false, 00:16:26.094 "zone_append": false, 00:16:26.094 "compare": false, 00:16:26.094 "compare_and_write": false, 00:16:26.094 "abort": true, 00:16:26.094 "seek_hole": false, 00:16:26.094 "seek_data": false, 00:16:26.094 "copy": true, 00:16:26.094 "nvme_iov_md": false 00:16:26.094 }, 00:16:26.094 "memory_domains": [ 00:16:26.094 { 00:16:26.094 "dma_device_id": "system", 00:16:26.094 "dma_device_type": 1 00:16:26.094 }, 00:16:26.094 { 00:16:26.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.094 "dma_device_type": 2 00:16:26.094 } 00:16:26.094 ], 00:16:26.094 "driver_specific": {} 00:16:26.094 }' 00:16:26.094 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.094 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.094 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.094 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.094 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.094 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.094 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.094 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.350 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.350 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.350 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.350 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.350 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:26.350 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:26.350 10:23:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.644 
10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.644 "name": "BaseBdev3", 00:16:26.644 "aliases": [ 00:16:26.644 "8ac180ea-9cfb-433c-ac0f-e10d2a6c1178" 00:16:26.644 ], 00:16:26.644 "product_name": "Malloc disk", 00:16:26.644 "block_size": 512, 00:16:26.644 "num_blocks": 65536, 00:16:26.644 "uuid": "8ac180ea-9cfb-433c-ac0f-e10d2a6c1178", 00:16:26.644 "assigned_rate_limits": { 00:16:26.644 "rw_ios_per_sec": 0, 00:16:26.644 "rw_mbytes_per_sec": 0, 00:16:26.644 "r_mbytes_per_sec": 0, 00:16:26.644 "w_mbytes_per_sec": 0 00:16:26.644 }, 00:16:26.644 "claimed": true, 00:16:26.644 "claim_type": "exclusive_write", 00:16:26.644 "zoned": false, 00:16:26.644 "supported_io_types": { 00:16:26.644 "read": true, 00:16:26.644 "write": true, 00:16:26.644 "unmap": true, 00:16:26.644 "flush": true, 00:16:26.644 "reset": true, 00:16:26.644 "nvme_admin": false, 00:16:26.644 "nvme_io": false, 00:16:26.644 "nvme_io_md": false, 00:16:26.644 "write_zeroes": true, 00:16:26.644 "zcopy": true, 00:16:26.644 "get_zone_info": false, 00:16:26.644 "zone_management": false, 00:16:26.644 "zone_append": false, 00:16:26.644 "compare": false, 00:16:26.644 "compare_and_write": false, 00:16:26.644 "abort": true, 00:16:26.644 "seek_hole": false, 00:16:26.644 "seek_data": false, 00:16:26.644 "copy": true, 00:16:26.644 "nvme_iov_md": false 00:16:26.644 }, 00:16:26.644 "memory_domains": [ 00:16:26.644 { 00:16:26.644 "dma_device_id": "system", 00:16:26.644 "dma_device_type": 1 00:16:26.644 }, 00:16:26.644 { 00:16:26.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.644 "dma_device_type": 2 00:16:26.644 } 00:16:26.644 ], 00:16:26.644 "driver_specific": {} 00:16:26.644 }' 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.644 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.902 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.902 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:26.902 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:26.902 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.902 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.902 "name": "BaseBdev4", 
00:16:26.902 "aliases": [ 00:16:26.902 "f2640261-d72b-46cb-9287-5970bb0afae8" 00:16:26.902 ], 00:16:26.902 "product_name": "Malloc disk", 00:16:26.902 "block_size": 512, 00:16:26.902 "num_blocks": 65536, 00:16:26.902 "uuid": "f2640261-d72b-46cb-9287-5970bb0afae8", 00:16:26.902 "assigned_rate_limits": { 00:16:26.902 "rw_ios_per_sec": 0, 00:16:26.902 "rw_mbytes_per_sec": 0, 00:16:26.902 "r_mbytes_per_sec": 0, 00:16:26.902 "w_mbytes_per_sec": 0 00:16:26.902 }, 00:16:26.902 "claimed": true, 00:16:26.902 "claim_type": "exclusive_write", 00:16:26.902 "zoned": false, 00:16:26.902 "supported_io_types": { 00:16:26.902 "read": true, 00:16:26.902 "write": true, 00:16:26.902 "unmap": true, 00:16:26.902 "flush": true, 00:16:26.902 "reset": true, 00:16:26.902 "nvme_admin": false, 00:16:26.902 "nvme_io": false, 00:16:26.902 "nvme_io_md": false, 00:16:26.902 "write_zeroes": true, 00:16:26.902 "zcopy": true, 00:16:26.902 "get_zone_info": false, 00:16:26.902 "zone_management": false, 00:16:26.902 "zone_append": false, 00:16:26.902 "compare": false, 00:16:26.902 "compare_and_write": false, 00:16:26.902 "abort": true, 00:16:26.902 "seek_hole": false, 00:16:26.902 "seek_data": false, 00:16:26.902 "copy": true, 00:16:26.903 "nvme_iov_md": false 00:16:26.903 }, 00:16:26.903 "memory_domains": [ 00:16:26.903 { 00:16:26.903 "dma_device_id": "system", 00:16:26.903 "dma_device_type": 1 00:16:26.903 }, 00:16:26.903 { 00:16:26.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.903 "dma_device_type": 2 00:16:26.903 } 00:16:26.903 ], 00:16:26.903 "driver_specific": {} 00:16:26.903 }' 00:16:26.903 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.903 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:27.160 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:27.161 10:23:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:27.419 [2024-07-15 10:23:52.095739] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.419 [2024-07-15 10:23:52.095759] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.419 [2024-07-15 10:23:52.095790] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:27.419 10:23:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.419 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.678 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.678 "name": "Existed_Raid", 00:16:27.678 "uuid": "cee09de8-1210-4a41-ab3e-88f8e6af64d4", 00:16:27.678 "strip_size_kb": 64, 00:16:27.678 "state": "offline", 00:16:27.678 "raid_level": "concat", 00:16:27.678 "superblock": true, 00:16:27.678 "num_base_bdevs": 4, 00:16:27.678 "num_base_bdevs_discovered": 3, 00:16:27.678 "num_base_bdevs_operational": 3, 00:16:27.678 "base_bdevs_list": [ 00:16:27.678 { 00:16:27.678 "name": null, 00:16:27.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.678 "is_configured": false, 00:16:27.678 "data_offset": 2048, 00:16:27.678 "data_size": 63488 00:16:27.678 }, 00:16:27.678 { 00:16:27.678 "name": "BaseBdev2", 00:16:27.678 "uuid": "5414653b-4a35-4f1c-b79a-121d0a0546a9", 00:16:27.678 "is_configured": true, 00:16:27.678 "data_offset": 2048, 00:16:27.678 "data_size": 63488 00:16:27.678 }, 00:16:27.678 { 00:16:27.678 "name": "BaseBdev3", 00:16:27.678 "uuid": "8ac180ea-9cfb-433c-ac0f-e10d2a6c1178", 00:16:27.678 "is_configured": true, 00:16:27.678 "data_offset": 2048, 00:16:27.678 "data_size": 63488 00:16:27.678 }, 00:16:27.678 { 00:16:27.678 "name": "BaseBdev4", 00:16:27.678 "uuid": "f2640261-d72b-46cb-9287-5970bb0afae8", 00:16:27.678 "is_configured": true, 00:16:27.678 "data_offset": 2048, 00:16:27.678 "data_size": 63488 00:16:27.678 } 00:16:27.678 ] 00:16:27.678 }' 00:16:27.678 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.678 10:23:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.245 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:28.245 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:28.245 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:28.245 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.245 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:28.245 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.245 10:23:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:28.504 [2024-07-15 10:23:53.079065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.504 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:28.504 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:28.504 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.504 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:28.504 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:28.504 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.504 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:28.762 [2024-07-15 10:23:53.417365] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.762 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:28.762 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:28.762 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.762 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:29.020 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:29.020 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:29.020 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:29.020 [2024-07-15 10:23:53.767914] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:29.020 [2024-07-15 10:23:53.767941] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8e0830 name Existed_Raid, state offline 00:16:29.020 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 
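At this point the test is tearing down the base bdevs one at a time and re-reading the RAID bdev over the same RPC socket after each removal to check how its reported state and discovered-bdev count change. The following is a condensed, illustrative sketch of that pattern, not the test script itself: the rpc.py path, socket, RPC commands, and bdev names are taken verbatim from the log above, while the loop and the exact jq field selection are assumptions added for clarity.

```bash
#!/usr/bin/env bash
# Illustrative sketch of the remove-and-verify pattern seen in the surrounding log.
# Socket path, rpc.py location, RPC names and bdev names are copied from the log;
# the loop structure and jq projection are illustrative only.
rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for b in BaseBdev2 BaseBdev3 BaseBdev4; do
    # Delete one malloc base bdev, as the test does via bdev_malloc_delete.
    $rpc bdev_malloc_delete "$b"

    # Re-query the RAID bdev and print the fields the test compares
    # (state, e.g. online/offline, and how many base bdevs are still discovered).
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'
done
```

Once the last base bdev is gone the RAID bdev is cleaned up, so the final query returns an empty selection; the log's later `bdev_malloc_create 32 512 -b BaseBdevN` calls followed by `bdev_wait_for_examine` then rebuild the base bdevs before the next configuration check.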
00:16:29.020 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:29.020 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.020 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:29.278 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:29.278 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:29.278 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:29.278 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:29.278 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:29.278 10:23:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:29.536 BaseBdev2 00:16:29.536 10:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:29.536 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:29.536 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:29.536 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:29.536 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:29.536 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:29.536 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.536 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:29.795 [ 00:16:29.795 { 00:16:29.795 "name": "BaseBdev2", 00:16:29.795 "aliases": [ 00:16:29.795 "02602889-190d-4488-9d47-e80184159062" 00:16:29.795 ], 00:16:29.795 "product_name": "Malloc disk", 00:16:29.795 "block_size": 512, 00:16:29.795 "num_blocks": 65536, 00:16:29.795 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:29.795 "assigned_rate_limits": { 00:16:29.795 "rw_ios_per_sec": 0, 00:16:29.795 "rw_mbytes_per_sec": 0, 00:16:29.795 "r_mbytes_per_sec": 0, 00:16:29.795 "w_mbytes_per_sec": 0 00:16:29.795 }, 00:16:29.795 "claimed": false, 00:16:29.795 "zoned": false, 00:16:29.795 "supported_io_types": { 00:16:29.795 "read": true, 00:16:29.795 "write": true, 00:16:29.795 "unmap": true, 00:16:29.795 "flush": true, 00:16:29.795 "reset": true, 00:16:29.795 "nvme_admin": false, 00:16:29.795 "nvme_io": false, 00:16:29.795 "nvme_io_md": false, 00:16:29.795 "write_zeroes": true, 00:16:29.795 "zcopy": true, 00:16:29.795 "get_zone_info": false, 00:16:29.795 "zone_management": false, 00:16:29.795 "zone_append": false, 00:16:29.795 "compare": false, 00:16:29.795 "compare_and_write": false, 00:16:29.795 "abort": true, 00:16:29.795 "seek_hole": false, 00:16:29.795 "seek_data": false, 00:16:29.795 "copy": 
true, 00:16:29.795 "nvme_iov_md": false 00:16:29.795 }, 00:16:29.795 "memory_domains": [ 00:16:29.795 { 00:16:29.795 "dma_device_id": "system", 00:16:29.795 "dma_device_type": 1 00:16:29.795 }, 00:16:29.795 { 00:16:29.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.795 "dma_device_type": 2 00:16:29.795 } 00:16:29.795 ], 00:16:29.795 "driver_specific": {} 00:16:29.795 } 00:16:29.795 ] 00:16:29.795 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:29.795 10:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:29.795 10:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:29.795 10:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:30.054 BaseBdev3 00:16:30.054 10:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:30.054 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:30.054 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:30.054 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:30.054 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:30.054 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:30.054 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.054 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:30.313 [ 00:16:30.313 { 00:16:30.313 "name": "BaseBdev3", 00:16:30.313 "aliases": [ 00:16:30.313 "99948a44-a846-4bec-a700-43fc6f192c06" 00:16:30.313 ], 00:16:30.313 "product_name": "Malloc disk", 00:16:30.313 "block_size": 512, 00:16:30.313 "num_blocks": 65536, 00:16:30.313 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:30.313 "assigned_rate_limits": { 00:16:30.313 "rw_ios_per_sec": 0, 00:16:30.313 "rw_mbytes_per_sec": 0, 00:16:30.313 "r_mbytes_per_sec": 0, 00:16:30.313 "w_mbytes_per_sec": 0 00:16:30.313 }, 00:16:30.313 "claimed": false, 00:16:30.313 "zoned": false, 00:16:30.313 "supported_io_types": { 00:16:30.313 "read": true, 00:16:30.313 "write": true, 00:16:30.313 "unmap": true, 00:16:30.313 "flush": true, 00:16:30.313 "reset": true, 00:16:30.313 "nvme_admin": false, 00:16:30.313 "nvme_io": false, 00:16:30.313 "nvme_io_md": false, 00:16:30.313 "write_zeroes": true, 00:16:30.313 "zcopy": true, 00:16:30.313 "get_zone_info": false, 00:16:30.313 "zone_management": false, 00:16:30.313 "zone_append": false, 00:16:30.313 "compare": false, 00:16:30.313 "compare_and_write": false, 00:16:30.313 "abort": true, 00:16:30.313 "seek_hole": false, 00:16:30.313 "seek_data": false, 00:16:30.313 "copy": true, 00:16:30.313 "nvme_iov_md": false 00:16:30.313 }, 00:16:30.313 "memory_domains": [ 00:16:30.313 { 00:16:30.313 "dma_device_id": "system", 00:16:30.313 "dma_device_type": 1 00:16:30.313 }, 00:16:30.313 { 00:16:30.313 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:30.313 "dma_device_type": 2 00:16:30.313 } 00:16:30.313 ], 00:16:30.313 "driver_specific": {} 00:16:30.313 } 00:16:30.313 ] 00:16:30.313 10:23:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:30.313 10:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:30.313 10:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:30.313 10:23:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:30.313 BaseBdev4 00:16:30.313 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:30.313 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:30.313 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:30.313 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:30.313 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:30.313 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:30.313 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:30.572 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:30.832 [ 00:16:30.832 { 00:16:30.832 "name": "BaseBdev4", 00:16:30.832 "aliases": [ 00:16:30.832 "025e2288-dbef-4a5f-99c7-13fe51f0e895" 00:16:30.832 ], 00:16:30.832 "product_name": "Malloc disk", 00:16:30.832 "block_size": 512, 00:16:30.832 "num_blocks": 65536, 00:16:30.832 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:30.832 "assigned_rate_limits": { 00:16:30.832 "rw_ios_per_sec": 0, 00:16:30.832 "rw_mbytes_per_sec": 0, 00:16:30.832 "r_mbytes_per_sec": 0, 00:16:30.832 "w_mbytes_per_sec": 0 00:16:30.832 }, 00:16:30.832 "claimed": false, 00:16:30.832 "zoned": false, 00:16:30.832 "supported_io_types": { 00:16:30.832 "read": true, 00:16:30.832 "write": true, 00:16:30.832 "unmap": true, 00:16:30.832 "flush": true, 00:16:30.832 "reset": true, 00:16:30.832 "nvme_admin": false, 00:16:30.832 "nvme_io": false, 00:16:30.832 "nvme_io_md": false, 00:16:30.832 "write_zeroes": true, 00:16:30.832 "zcopy": true, 00:16:30.832 "get_zone_info": false, 00:16:30.832 "zone_management": false, 00:16:30.832 "zone_append": false, 00:16:30.832 "compare": false, 00:16:30.832 "compare_and_write": false, 00:16:30.832 "abort": true, 00:16:30.832 "seek_hole": false, 00:16:30.832 "seek_data": false, 00:16:30.832 "copy": true, 00:16:30.832 "nvme_iov_md": false 00:16:30.832 }, 00:16:30.832 "memory_domains": [ 00:16:30.832 { 00:16:30.832 "dma_device_id": "system", 00:16:30.832 "dma_device_type": 1 00:16:30.832 }, 00:16:30.832 { 00:16:30.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.832 "dma_device_type": 2 00:16:30.832 } 00:16:30.832 ], 00:16:30.832 "driver_specific": {} 00:16:30.832 } 00:16:30.832 ] 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # 
return 0 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:30.832 [2024-07-15 10:23:55.573161] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.832 [2024-07-15 10:23:55.573194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.832 [2024-07-15 10:23:55.573206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.832 [2024-07-15 10:23:55.574268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.832 [2024-07-15 10:23:55.574297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.832 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.090 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.090 "name": "Existed_Raid", 00:16:31.090 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:31.090 "strip_size_kb": 64, 00:16:31.090 "state": "configuring", 00:16:31.090 "raid_level": "concat", 00:16:31.090 "superblock": true, 00:16:31.090 "num_base_bdevs": 4, 00:16:31.090 "num_base_bdevs_discovered": 3, 00:16:31.090 "num_base_bdevs_operational": 4, 00:16:31.090 "base_bdevs_list": [ 00:16:31.090 { 00:16:31.090 "name": "BaseBdev1", 00:16:31.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.090 "is_configured": false, 00:16:31.090 "data_offset": 0, 00:16:31.090 "data_size": 0 00:16:31.090 }, 00:16:31.090 { 00:16:31.090 "name": "BaseBdev2", 00:16:31.090 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:31.090 "is_configured": true, 00:16:31.090 
"data_offset": 2048, 00:16:31.090 "data_size": 63488 00:16:31.090 }, 00:16:31.090 { 00:16:31.090 "name": "BaseBdev3", 00:16:31.090 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:31.090 "is_configured": true, 00:16:31.090 "data_offset": 2048, 00:16:31.090 "data_size": 63488 00:16:31.090 }, 00:16:31.090 { 00:16:31.090 "name": "BaseBdev4", 00:16:31.090 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:31.090 "is_configured": true, 00:16:31.090 "data_offset": 2048, 00:16:31.090 "data_size": 63488 00:16:31.090 } 00:16:31.090 ] 00:16:31.090 }' 00:16:31.090 10:23:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.090 10:23:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:31.656 [2024-07-15 10:23:56.419319] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.656 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.915 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.915 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.915 "name": "Existed_Raid", 00:16:31.915 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:31.915 "strip_size_kb": 64, 00:16:31.915 "state": "configuring", 00:16:31.915 "raid_level": "concat", 00:16:31.915 "superblock": true, 00:16:31.915 "num_base_bdevs": 4, 00:16:31.915 "num_base_bdevs_discovered": 2, 00:16:31.915 "num_base_bdevs_operational": 4, 00:16:31.915 "base_bdevs_list": [ 00:16:31.915 { 00:16:31.915 "name": "BaseBdev1", 00:16:31.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.915 "is_configured": false, 00:16:31.915 "data_offset": 0, 00:16:31.915 "data_size": 0 00:16:31.915 }, 00:16:31.915 { 00:16:31.915 "name": null, 00:16:31.915 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:31.915 "is_configured": false, 00:16:31.915 "data_offset": 2048, 00:16:31.915 "data_size": 63488 
00:16:31.915 }, 00:16:31.915 { 00:16:31.915 "name": "BaseBdev3", 00:16:31.915 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:31.915 "is_configured": true, 00:16:31.915 "data_offset": 2048, 00:16:31.915 "data_size": 63488 00:16:31.915 }, 00:16:31.915 { 00:16:31.915 "name": "BaseBdev4", 00:16:31.915 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:31.915 "is_configured": true, 00:16:31.915 "data_offset": 2048, 00:16:31.915 "data_size": 63488 00:16:31.915 } 00:16:31.915 ] 00:16:31.915 }' 00:16:31.915 10:23:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.915 10:23:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.482 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.482 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.482 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:32.482 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.740 [2024-07-15 10:23:57.388664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.740 BaseBdev1 00:16:32.740 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:32.741 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:32.741 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:32.741 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:32.741 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:32.741 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:32.741 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.000 [ 00:16:33.000 { 00:16:33.000 "name": "BaseBdev1", 00:16:33.000 "aliases": [ 00:16:33.000 "9194d48d-cb3c-488d-8237-53414dad4400" 00:16:33.000 ], 00:16:33.000 "product_name": "Malloc disk", 00:16:33.000 "block_size": 512, 00:16:33.000 "num_blocks": 65536, 00:16:33.000 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:33.000 "assigned_rate_limits": { 00:16:33.000 "rw_ios_per_sec": 0, 00:16:33.000 "rw_mbytes_per_sec": 0, 00:16:33.000 "r_mbytes_per_sec": 0, 00:16:33.000 "w_mbytes_per_sec": 0 00:16:33.000 }, 00:16:33.000 "claimed": true, 00:16:33.000 "claim_type": "exclusive_write", 00:16:33.000 "zoned": false, 00:16:33.000 "supported_io_types": { 00:16:33.000 "read": true, 00:16:33.000 "write": true, 00:16:33.000 "unmap": true, 00:16:33.000 "flush": true, 00:16:33.000 "reset": true, 00:16:33.000 "nvme_admin": false, 00:16:33.000 "nvme_io": false, 00:16:33.000 "nvme_io_md": 
false, 00:16:33.000 "write_zeroes": true, 00:16:33.000 "zcopy": true, 00:16:33.000 "get_zone_info": false, 00:16:33.000 "zone_management": false, 00:16:33.000 "zone_append": false, 00:16:33.000 "compare": false, 00:16:33.000 "compare_and_write": false, 00:16:33.000 "abort": true, 00:16:33.000 "seek_hole": false, 00:16:33.000 "seek_data": false, 00:16:33.000 "copy": true, 00:16:33.000 "nvme_iov_md": false 00:16:33.000 }, 00:16:33.000 "memory_domains": [ 00:16:33.000 { 00:16:33.000 "dma_device_id": "system", 00:16:33.000 "dma_device_type": 1 00:16:33.000 }, 00:16:33.000 { 00:16:33.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.000 "dma_device_type": 2 00:16:33.000 } 00:16:33.000 ], 00:16:33.000 "driver_specific": {} 00:16:33.000 } 00:16:33.000 ] 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.000 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.259 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.259 "name": "Existed_Raid", 00:16:33.259 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:33.259 "strip_size_kb": 64, 00:16:33.259 "state": "configuring", 00:16:33.259 "raid_level": "concat", 00:16:33.259 "superblock": true, 00:16:33.259 "num_base_bdevs": 4, 00:16:33.259 "num_base_bdevs_discovered": 3, 00:16:33.259 "num_base_bdevs_operational": 4, 00:16:33.259 "base_bdevs_list": [ 00:16:33.259 { 00:16:33.259 "name": "BaseBdev1", 00:16:33.259 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:33.259 "is_configured": true, 00:16:33.259 "data_offset": 2048, 00:16:33.259 "data_size": 63488 00:16:33.259 }, 00:16:33.259 { 00:16:33.259 "name": null, 00:16:33.259 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:33.259 "is_configured": false, 00:16:33.259 "data_offset": 2048, 00:16:33.259 "data_size": 63488 00:16:33.259 }, 00:16:33.259 { 00:16:33.259 "name": "BaseBdev3", 00:16:33.259 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:33.259 "is_configured": true, 00:16:33.259 "data_offset": 2048, 00:16:33.259 
"data_size": 63488 00:16:33.259 }, 00:16:33.259 { 00:16:33.259 "name": "BaseBdev4", 00:16:33.259 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:33.259 "is_configured": true, 00:16:33.259 "data_offset": 2048, 00:16:33.259 "data_size": 63488 00:16:33.259 } 00:16:33.259 ] 00:16:33.259 }' 00:16:33.259 10:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.259 10:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.825 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.825 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:33.825 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:33.825 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:34.084 [2024-07-15 10:23:58.704088] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.084 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.343 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.343 "name": "Existed_Raid", 00:16:34.343 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:34.343 "strip_size_kb": 64, 00:16:34.343 "state": "configuring", 00:16:34.344 "raid_level": "concat", 00:16:34.344 "superblock": true, 00:16:34.344 "num_base_bdevs": 4, 00:16:34.344 "num_base_bdevs_discovered": 2, 00:16:34.344 "num_base_bdevs_operational": 4, 00:16:34.344 "base_bdevs_list": [ 00:16:34.344 { 00:16:34.344 "name": "BaseBdev1", 00:16:34.344 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:34.344 "is_configured": true, 00:16:34.344 "data_offset": 2048, 00:16:34.344 "data_size": 63488 00:16:34.344 }, 00:16:34.344 { 
00:16:34.344 "name": null, 00:16:34.344 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:34.344 "is_configured": false, 00:16:34.344 "data_offset": 2048, 00:16:34.344 "data_size": 63488 00:16:34.344 }, 00:16:34.344 { 00:16:34.344 "name": null, 00:16:34.344 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:34.344 "is_configured": false, 00:16:34.344 "data_offset": 2048, 00:16:34.344 "data_size": 63488 00:16:34.344 }, 00:16:34.344 { 00:16:34.344 "name": "BaseBdev4", 00:16:34.344 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:34.344 "is_configured": true, 00:16:34.344 "data_offset": 2048, 00:16:34.344 "data_size": 63488 00:16:34.344 } 00:16:34.344 ] 00:16:34.344 }' 00:16:34.344 10:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.344 10:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.602 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.602 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:34.861 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:34.861 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:35.119 [2024-07-15 10:23:59.686607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.119 "name": "Existed_Raid", 00:16:35.119 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:35.119 "strip_size_kb": 64, 00:16:35.119 "state": "configuring", 00:16:35.119 "raid_level": "concat", 
00:16:35.119 "superblock": true, 00:16:35.119 "num_base_bdevs": 4, 00:16:35.119 "num_base_bdevs_discovered": 3, 00:16:35.119 "num_base_bdevs_operational": 4, 00:16:35.119 "base_bdevs_list": [ 00:16:35.119 { 00:16:35.119 "name": "BaseBdev1", 00:16:35.119 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:35.119 "is_configured": true, 00:16:35.119 "data_offset": 2048, 00:16:35.119 "data_size": 63488 00:16:35.119 }, 00:16:35.119 { 00:16:35.119 "name": null, 00:16:35.119 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:35.119 "is_configured": false, 00:16:35.119 "data_offset": 2048, 00:16:35.119 "data_size": 63488 00:16:35.119 }, 00:16:35.119 { 00:16:35.119 "name": "BaseBdev3", 00:16:35.119 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:35.119 "is_configured": true, 00:16:35.119 "data_offset": 2048, 00:16:35.119 "data_size": 63488 00:16:35.119 }, 00:16:35.119 { 00:16:35.119 "name": "BaseBdev4", 00:16:35.119 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:35.119 "is_configured": true, 00:16:35.119 "data_offset": 2048, 00:16:35.119 "data_size": 63488 00:16:35.119 } 00:16:35.119 ] 00:16:35.119 }' 00:16:35.119 10:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.120 10:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.685 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.685 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:35.944 [2024-07-15 10:24:00.645088] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.944 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.944 10:24:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.202 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.202 "name": "Existed_Raid", 00:16:36.202 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:36.202 "strip_size_kb": 64, 00:16:36.202 "state": "configuring", 00:16:36.202 "raid_level": "concat", 00:16:36.202 "superblock": true, 00:16:36.202 "num_base_bdevs": 4, 00:16:36.202 "num_base_bdevs_discovered": 2, 00:16:36.202 "num_base_bdevs_operational": 4, 00:16:36.202 "base_bdevs_list": [ 00:16:36.202 { 00:16:36.202 "name": null, 00:16:36.202 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:36.202 "is_configured": false, 00:16:36.202 "data_offset": 2048, 00:16:36.202 "data_size": 63488 00:16:36.202 }, 00:16:36.202 { 00:16:36.202 "name": null, 00:16:36.202 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:36.202 "is_configured": false, 00:16:36.202 "data_offset": 2048, 00:16:36.202 "data_size": 63488 00:16:36.202 }, 00:16:36.202 { 00:16:36.202 "name": "BaseBdev3", 00:16:36.202 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:36.202 "is_configured": true, 00:16:36.202 "data_offset": 2048, 00:16:36.202 "data_size": 63488 00:16:36.202 }, 00:16:36.202 { 00:16:36.202 "name": "BaseBdev4", 00:16:36.202 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:36.202 "is_configured": true, 00:16:36.202 "data_offset": 2048, 00:16:36.202 "data_size": 63488 00:16:36.202 } 00:16:36.202 ] 00:16:36.202 }' 00:16:36.202 10:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.202 10:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.765 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.765 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:36.765 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:36.765 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:37.023 [2024-07-15 10:24:01.669444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.023 10:24:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.023 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.281 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.281 "name": "Existed_Raid", 00:16:37.281 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:37.281 "strip_size_kb": 64, 00:16:37.281 "state": "configuring", 00:16:37.281 "raid_level": "concat", 00:16:37.281 "superblock": true, 00:16:37.281 "num_base_bdevs": 4, 00:16:37.281 "num_base_bdevs_discovered": 3, 00:16:37.281 "num_base_bdevs_operational": 4, 00:16:37.281 "base_bdevs_list": [ 00:16:37.281 { 00:16:37.281 "name": null, 00:16:37.281 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:37.281 "is_configured": false, 00:16:37.281 "data_offset": 2048, 00:16:37.281 "data_size": 63488 00:16:37.281 }, 00:16:37.281 { 00:16:37.281 "name": "BaseBdev2", 00:16:37.281 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:37.281 "is_configured": true, 00:16:37.281 "data_offset": 2048, 00:16:37.281 "data_size": 63488 00:16:37.281 }, 00:16:37.281 { 00:16:37.281 "name": "BaseBdev3", 00:16:37.281 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:37.281 "is_configured": true, 00:16:37.281 "data_offset": 2048, 00:16:37.281 "data_size": 63488 00:16:37.281 }, 00:16:37.281 { 00:16:37.281 "name": "BaseBdev4", 00:16:37.281 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:37.281 "is_configured": true, 00:16:37.282 "data_offset": 2048, 00:16:37.282 "data_size": 63488 00:16:37.282 } 00:16:37.282 ] 00:16:37.282 }' 00:16:37.282 10:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.282 10:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.540 10:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.540 10:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:37.798 10:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:37.798 10:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.798 10:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:38.057 10:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9194d48d-cb3c-488d-8237-53414dad4400 00:16:38.057 [2024-07-15 10:24:02.807146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:38.057 [2024-07-15 10:24:02.807257] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8e0530 00:16:38.057 [2024-07-15 10:24:02.807266] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:38.057 [2024-07-15 10:24:02.807374] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8dc9f0 00:16:38.057 [2024-07-15 10:24:02.807447] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8e0530 00:16:38.057 [2024-07-15 10:24:02.807453] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x8e0530 00:16:38.057 [2024-07-15 10:24:02.807509] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.057 NewBaseBdev 00:16:38.057 10:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:38.057 10:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:38.057 10:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:38.057 10:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:38.057 10:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:38.057 10:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:38.057 10:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.316 10:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:38.575 [ 00:16:38.575 { 00:16:38.575 "name": "NewBaseBdev", 00:16:38.575 "aliases": [ 00:16:38.575 "9194d48d-cb3c-488d-8237-53414dad4400" 00:16:38.575 ], 00:16:38.575 "product_name": "Malloc disk", 00:16:38.575 "block_size": 512, 00:16:38.575 "num_blocks": 65536, 00:16:38.575 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:38.575 "assigned_rate_limits": { 00:16:38.575 "rw_ios_per_sec": 0, 00:16:38.575 "rw_mbytes_per_sec": 0, 00:16:38.575 "r_mbytes_per_sec": 0, 00:16:38.575 "w_mbytes_per_sec": 0 00:16:38.575 }, 00:16:38.575 "claimed": true, 00:16:38.575 "claim_type": "exclusive_write", 00:16:38.575 "zoned": false, 00:16:38.575 "supported_io_types": { 00:16:38.575 "read": true, 00:16:38.575 "write": true, 00:16:38.575 "unmap": true, 00:16:38.575 "flush": true, 00:16:38.575 "reset": true, 00:16:38.575 "nvme_admin": false, 00:16:38.575 "nvme_io": false, 00:16:38.575 "nvme_io_md": false, 00:16:38.575 "write_zeroes": true, 00:16:38.575 "zcopy": true, 00:16:38.575 "get_zone_info": false, 00:16:38.575 "zone_management": false, 00:16:38.575 "zone_append": false, 00:16:38.575 "compare": false, 00:16:38.575 "compare_and_write": false, 00:16:38.575 "abort": true, 00:16:38.575 "seek_hole": false, 00:16:38.575 "seek_data": false, 00:16:38.575 "copy": true, 00:16:38.575 "nvme_iov_md": false 00:16:38.575 }, 00:16:38.575 "memory_domains": [ 00:16:38.575 { 00:16:38.575 "dma_device_id": "system", 00:16:38.575 "dma_device_type": 1 00:16:38.575 }, 00:16:38.575 { 00:16:38.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.575 "dma_device_type": 2 00:16:38.575 } 00:16:38.575 ], 00:16:38.575 "driver_specific": {} 00:16:38.575 } 00:16:38.575 ] 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:38.575 10:24:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.575 "name": "Existed_Raid", 00:16:38.575 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:38.575 "strip_size_kb": 64, 00:16:38.575 "state": "online", 00:16:38.575 "raid_level": "concat", 00:16:38.575 "superblock": true, 00:16:38.575 "num_base_bdevs": 4, 00:16:38.575 "num_base_bdevs_discovered": 4, 00:16:38.575 "num_base_bdevs_operational": 4, 00:16:38.575 "base_bdevs_list": [ 00:16:38.575 { 00:16:38.575 "name": "NewBaseBdev", 00:16:38.575 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:38.575 "is_configured": true, 00:16:38.575 "data_offset": 2048, 00:16:38.575 "data_size": 63488 00:16:38.575 }, 00:16:38.575 { 00:16:38.575 "name": "BaseBdev2", 00:16:38.575 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:38.575 "is_configured": true, 00:16:38.575 "data_offset": 2048, 00:16:38.575 "data_size": 63488 00:16:38.575 }, 00:16:38.575 { 00:16:38.575 "name": "BaseBdev3", 00:16:38.575 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:38.575 "is_configured": true, 00:16:38.575 "data_offset": 2048, 00:16:38.575 "data_size": 63488 00:16:38.575 }, 00:16:38.575 { 00:16:38.575 "name": "BaseBdev4", 00:16:38.575 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:38.575 "is_configured": true, 00:16:38.575 "data_offset": 2048, 00:16:38.575 "data_size": 63488 00:16:38.575 } 00:16:38.575 ] 00:16:38.575 }' 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.575 10:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.141 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:39.141 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:39.141 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:39.141 10:24:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:39.141 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:39.141 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:39.141 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:39.141 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:39.399 [2024-07-15 10:24:03.942318] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.399 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:39.399 "name": "Existed_Raid", 00:16:39.399 "aliases": [ 00:16:39.399 "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a" 00:16:39.399 ], 00:16:39.399 "product_name": "Raid Volume", 00:16:39.399 "block_size": 512, 00:16:39.399 "num_blocks": 253952, 00:16:39.399 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:39.400 "assigned_rate_limits": { 00:16:39.400 "rw_ios_per_sec": 0, 00:16:39.400 "rw_mbytes_per_sec": 0, 00:16:39.400 "r_mbytes_per_sec": 0, 00:16:39.400 "w_mbytes_per_sec": 0 00:16:39.400 }, 00:16:39.400 "claimed": false, 00:16:39.400 "zoned": false, 00:16:39.400 "supported_io_types": { 00:16:39.400 "read": true, 00:16:39.400 "write": true, 00:16:39.400 "unmap": true, 00:16:39.400 "flush": true, 00:16:39.400 "reset": true, 00:16:39.400 "nvme_admin": false, 00:16:39.400 "nvme_io": false, 00:16:39.400 "nvme_io_md": false, 00:16:39.400 "write_zeroes": true, 00:16:39.400 "zcopy": false, 00:16:39.400 "get_zone_info": false, 00:16:39.400 "zone_management": false, 00:16:39.400 "zone_append": false, 00:16:39.400 "compare": false, 00:16:39.400 "compare_and_write": false, 00:16:39.400 "abort": false, 00:16:39.400 "seek_hole": false, 00:16:39.400 "seek_data": false, 00:16:39.400 "copy": false, 00:16:39.400 "nvme_iov_md": false 00:16:39.400 }, 00:16:39.400 "memory_domains": [ 00:16:39.400 { 00:16:39.400 "dma_device_id": "system", 00:16:39.400 "dma_device_type": 1 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.400 "dma_device_type": 2 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "dma_device_id": "system", 00:16:39.400 "dma_device_type": 1 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.400 "dma_device_type": 2 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "dma_device_id": "system", 00:16:39.400 "dma_device_type": 1 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.400 "dma_device_type": 2 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "dma_device_id": "system", 00:16:39.400 "dma_device_type": 1 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.400 "dma_device_type": 2 00:16:39.400 } 00:16:39.400 ], 00:16:39.400 "driver_specific": { 00:16:39.400 "raid": { 00:16:39.400 "uuid": "298e31b8-0bd8-4cb2-b061-4d6db4e43b2a", 00:16:39.400 "strip_size_kb": 64, 00:16:39.400 "state": "online", 00:16:39.400 "raid_level": "concat", 00:16:39.400 "superblock": true, 00:16:39.400 "num_base_bdevs": 4, 00:16:39.400 "num_base_bdevs_discovered": 4, 00:16:39.400 "num_base_bdevs_operational": 4, 00:16:39.400 "base_bdevs_list": [ 00:16:39.400 { 00:16:39.400 "name": "NewBaseBdev", 00:16:39.400 "uuid": 
"9194d48d-cb3c-488d-8237-53414dad4400", 00:16:39.400 "is_configured": true, 00:16:39.400 "data_offset": 2048, 00:16:39.400 "data_size": 63488 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "name": "BaseBdev2", 00:16:39.400 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:39.400 "is_configured": true, 00:16:39.400 "data_offset": 2048, 00:16:39.400 "data_size": 63488 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "name": "BaseBdev3", 00:16:39.400 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:39.400 "is_configured": true, 00:16:39.400 "data_offset": 2048, 00:16:39.400 "data_size": 63488 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "name": "BaseBdev4", 00:16:39.400 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:39.400 "is_configured": true, 00:16:39.400 "data_offset": 2048, 00:16:39.400 "data_size": 63488 00:16:39.400 } 00:16:39.400 ] 00:16:39.400 } 00:16:39.400 } 00:16:39.400 }' 00:16:39.400 10:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.400 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:39.400 BaseBdev2 00:16:39.400 BaseBdev3 00:16:39.400 BaseBdev4' 00:16:39.400 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:39.400 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:39.400 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.400 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.400 "name": "NewBaseBdev", 00:16:39.400 "aliases": [ 00:16:39.400 "9194d48d-cb3c-488d-8237-53414dad4400" 00:16:39.400 ], 00:16:39.400 "product_name": "Malloc disk", 00:16:39.400 "block_size": 512, 00:16:39.400 "num_blocks": 65536, 00:16:39.400 "uuid": "9194d48d-cb3c-488d-8237-53414dad4400", 00:16:39.400 "assigned_rate_limits": { 00:16:39.400 "rw_ios_per_sec": 0, 00:16:39.400 "rw_mbytes_per_sec": 0, 00:16:39.400 "r_mbytes_per_sec": 0, 00:16:39.400 "w_mbytes_per_sec": 0 00:16:39.400 }, 00:16:39.400 "claimed": true, 00:16:39.400 "claim_type": "exclusive_write", 00:16:39.400 "zoned": false, 00:16:39.400 "supported_io_types": { 00:16:39.400 "read": true, 00:16:39.400 "write": true, 00:16:39.400 "unmap": true, 00:16:39.400 "flush": true, 00:16:39.400 "reset": true, 00:16:39.400 "nvme_admin": false, 00:16:39.400 "nvme_io": false, 00:16:39.400 "nvme_io_md": false, 00:16:39.400 "write_zeroes": true, 00:16:39.400 "zcopy": true, 00:16:39.400 "get_zone_info": false, 00:16:39.400 "zone_management": false, 00:16:39.400 "zone_append": false, 00:16:39.400 "compare": false, 00:16:39.400 "compare_and_write": false, 00:16:39.400 "abort": true, 00:16:39.400 "seek_hole": false, 00:16:39.400 "seek_data": false, 00:16:39.400 "copy": true, 00:16:39.400 "nvme_iov_md": false 00:16:39.400 }, 00:16:39.400 "memory_domains": [ 00:16:39.400 { 00:16:39.400 "dma_device_id": "system", 00:16:39.400 "dma_device_type": 1 00:16:39.400 }, 00:16:39.400 { 00:16:39.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.400 "dma_device_type": 2 00:16:39.400 } 00:16:39.400 ], 00:16:39.400 "driver_specific": {} 00:16:39.400 }' 00:16:39.400 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.695 10:24:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:39.695 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.953 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.953 "name": "BaseBdev2", 00:16:39.953 "aliases": [ 00:16:39.953 "02602889-190d-4488-9d47-e80184159062" 00:16:39.953 ], 00:16:39.953 "product_name": "Malloc disk", 00:16:39.953 "block_size": 512, 00:16:39.953 "num_blocks": 65536, 00:16:39.953 "uuid": "02602889-190d-4488-9d47-e80184159062", 00:16:39.953 "assigned_rate_limits": { 00:16:39.953 "rw_ios_per_sec": 0, 00:16:39.953 "rw_mbytes_per_sec": 0, 00:16:39.953 "r_mbytes_per_sec": 0, 00:16:39.953 "w_mbytes_per_sec": 0 00:16:39.953 }, 00:16:39.953 "claimed": true, 00:16:39.953 "claim_type": "exclusive_write", 00:16:39.953 "zoned": false, 00:16:39.953 "supported_io_types": { 00:16:39.953 "read": true, 00:16:39.953 "write": true, 00:16:39.953 "unmap": true, 00:16:39.953 "flush": true, 00:16:39.953 "reset": true, 00:16:39.953 "nvme_admin": false, 00:16:39.953 "nvme_io": false, 00:16:39.953 "nvme_io_md": false, 00:16:39.953 "write_zeroes": true, 00:16:39.953 "zcopy": true, 00:16:39.953 "get_zone_info": false, 00:16:39.953 "zone_management": false, 00:16:39.953 "zone_append": false, 00:16:39.953 "compare": false, 00:16:39.953 "compare_and_write": false, 00:16:39.953 "abort": true, 00:16:39.953 "seek_hole": false, 00:16:39.953 "seek_data": false, 00:16:39.953 "copy": true, 00:16:39.953 "nvme_iov_md": false 00:16:39.953 }, 00:16:39.953 "memory_domains": [ 00:16:39.953 { 00:16:39.953 "dma_device_id": "system", 00:16:39.954 "dma_device_type": 1 00:16:39.954 }, 00:16:39.954 { 00:16:39.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.954 "dma_device_type": 2 00:16:39.954 } 00:16:39.954 ], 00:16:39.954 "driver_specific": {} 00:16:39.954 }' 00:16:39.954 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.954 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.954 10:24:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:39.954 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.954 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:40.212 10:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:40.471 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:40.471 "name": "BaseBdev3", 00:16:40.471 "aliases": [ 00:16:40.471 "99948a44-a846-4bec-a700-43fc6f192c06" 00:16:40.471 ], 00:16:40.471 "product_name": "Malloc disk", 00:16:40.471 "block_size": 512, 00:16:40.471 "num_blocks": 65536, 00:16:40.471 "uuid": "99948a44-a846-4bec-a700-43fc6f192c06", 00:16:40.471 "assigned_rate_limits": { 00:16:40.471 "rw_ios_per_sec": 0, 00:16:40.471 "rw_mbytes_per_sec": 0, 00:16:40.471 "r_mbytes_per_sec": 0, 00:16:40.471 "w_mbytes_per_sec": 0 00:16:40.471 }, 00:16:40.471 "claimed": true, 00:16:40.471 "claim_type": "exclusive_write", 00:16:40.471 "zoned": false, 00:16:40.471 "supported_io_types": { 00:16:40.471 "read": true, 00:16:40.471 "write": true, 00:16:40.471 "unmap": true, 00:16:40.471 "flush": true, 00:16:40.471 "reset": true, 00:16:40.471 "nvme_admin": false, 00:16:40.471 "nvme_io": false, 00:16:40.471 "nvme_io_md": false, 00:16:40.471 "write_zeroes": true, 00:16:40.471 "zcopy": true, 00:16:40.471 "get_zone_info": false, 00:16:40.471 "zone_management": false, 00:16:40.471 "zone_append": false, 00:16:40.471 "compare": false, 00:16:40.471 "compare_and_write": false, 00:16:40.471 "abort": true, 00:16:40.471 "seek_hole": false, 00:16:40.471 "seek_data": false, 00:16:40.471 "copy": true, 00:16:40.471 "nvme_iov_md": false 00:16:40.471 }, 00:16:40.471 "memory_domains": [ 00:16:40.471 { 00:16:40.471 "dma_device_id": "system", 00:16:40.471 "dma_device_type": 1 00:16:40.471 }, 00:16:40.471 { 00:16:40.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.471 "dma_device_type": 2 00:16:40.471 } 00:16:40.471 ], 00:16:40.471 "driver_specific": {} 00:16:40.471 }' 00:16:40.471 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.471 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.471 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:40.471 10:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.471 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:40.730 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:40.989 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:40.989 "name": "BaseBdev4", 00:16:40.989 "aliases": [ 00:16:40.989 "025e2288-dbef-4a5f-99c7-13fe51f0e895" 00:16:40.989 ], 00:16:40.989 "product_name": "Malloc disk", 00:16:40.989 "block_size": 512, 00:16:40.989 "num_blocks": 65536, 00:16:40.989 "uuid": "025e2288-dbef-4a5f-99c7-13fe51f0e895", 00:16:40.989 "assigned_rate_limits": { 00:16:40.989 "rw_ios_per_sec": 0, 00:16:40.989 "rw_mbytes_per_sec": 0, 00:16:40.989 "r_mbytes_per_sec": 0, 00:16:40.989 "w_mbytes_per_sec": 0 00:16:40.989 }, 00:16:40.989 "claimed": true, 00:16:40.989 "claim_type": "exclusive_write", 00:16:40.989 "zoned": false, 00:16:40.989 "supported_io_types": { 00:16:40.989 "read": true, 00:16:40.989 "write": true, 00:16:40.989 "unmap": true, 00:16:40.989 "flush": true, 00:16:40.989 "reset": true, 00:16:40.989 "nvme_admin": false, 00:16:40.989 "nvme_io": false, 00:16:40.989 "nvme_io_md": false, 00:16:40.989 "write_zeroes": true, 00:16:40.989 "zcopy": true, 00:16:40.989 "get_zone_info": false, 00:16:40.989 "zone_management": false, 00:16:40.989 "zone_append": false, 00:16:40.989 "compare": false, 00:16:40.989 "compare_and_write": false, 00:16:40.989 "abort": true, 00:16:40.989 "seek_hole": false, 00:16:40.989 "seek_data": false, 00:16:40.989 "copy": true, 00:16:40.989 "nvme_iov_md": false 00:16:40.989 }, 00:16:40.989 "memory_domains": [ 00:16:40.989 { 00:16:40.989 "dma_device_id": "system", 00:16:40.989 "dma_device_type": 1 00:16:40.989 }, 00:16:40.989 { 00:16:40.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.989 "dma_device_type": 2 00:16:40.989 } 00:16:40.989 ], 00:16:40.989 "driver_specific": {} 00:16:40.989 }' 00:16:40.989 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.989 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.989 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:40.989 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.989 10:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.989 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:40.989 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.248 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.248 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:41.248 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.248 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.248 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:41.248 10:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:41.508 [2024-07-15 10:24:06.051541] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:41.508 [2024-07-15 10:24:06.051561] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.508 [2024-07-15 10:24:06.051597] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.508 [2024-07-15 10:24:06.051637] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.508 [2024-07-15 10:24:06.051644] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8e0530 name Existed_Raid, state offline 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1822993 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1822993 ']' 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1822993 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1822993 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1822993' 00:16:41.508 killing process with pid 1822993 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1822993 00:16:41.508 [2024-07-15 10:24:06.120973] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.508 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1822993 00:16:41.508 [2024-07-15 10:24:06.152057] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:41.768 10:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:41.768 00:16:41.768 real 0m24.212s 00:16:41.768 user 0m44.152s 00:16:41.768 sys 0m4.693s 00:16:41.768 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:16:41.768 10:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.768 ************************************ 00:16:41.768 END TEST raid_state_function_test_sb 00:16:41.768 ************************************ 00:16:41.768 10:24:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:41.768 10:24:06 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:41.768 10:24:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:41.768 10:24:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.768 10:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:41.768 ************************************ 00:16:41.768 START TEST raid_superblock_test 00:16:41.768 ************************************ 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1828191 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1828191 /var/tmp/spdk-raid.sock 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1828191 ']' 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:41.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.768 10:24:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.768 [2024-07-15 10:24:06.453307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:41.768 [2024-07-15 10:24:06.453349] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828191 ] 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:41.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.768 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:41.769 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:41.769 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:41.769 [2024-07-15 10:24:06.545101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.028 [2024-07-15 10:24:06.620922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.028 [2024-07-15 10:24:06.678019] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.028 [2024-07-15 10:24:06.678042] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.596 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:42.854 
malloc1 00:16:42.854 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.854 [2024-07-15 10:24:07.566280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.855 [2024-07-15 10:24:07.566315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.855 [2024-07-15 10:24:07.566327] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7dc2f0 00:16:42.855 [2024-07-15 10:24:07.566335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.855 [2024-07-15 10:24:07.567411] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.855 [2024-07-15 10:24:07.567430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.855 pt1 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:42.855 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:43.113 malloc2 00:16:43.113 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:43.113 [2024-07-15 10:24:07.882682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:43.113 [2024-07-15 10:24:07.882716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.113 [2024-07-15 10:24:07.882727] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7dd6d0 00:16:43.113 [2024-07-15 10:24:07.882736] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.113 [2024-07-15 10:24:07.883768] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.113 [2024-07-15 10:24:07.883790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:43.113 pt2 00:16:43.114 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:43.114 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:43.114 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:43.114 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:43.114 10:24:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:43.114 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.114 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.114 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.114 10:24:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:43.372 malloc3 00:16:43.372 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:43.631 [2024-07-15 10:24:08.199098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:43.631 [2024-07-15 10:24:08.199131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.631 [2024-07-15 10:24:08.199143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9766b0 00:16:43.631 [2024-07-15 10:24:08.199151] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.631 [2024-07-15 10:24:08.200121] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.631 [2024-07-15 10:24:08.200142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:43.631 pt3 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:43.631 malloc4 00:16:43.631 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:43.890 [2024-07-15 10:24:08.547686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:43.890 [2024-07-15 10:24:08.547721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.890 [2024-07-15 10:24:08.547737] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x974370 00:16:43.890 [2024-07-15 10:24:08.547746] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.890 
[2024-07-15 10:24:08.548791] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.890 [2024-07-15 10:24:08.548814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:43.890 pt4 00:16:43.890 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:43.890 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:43.890 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:44.149 [2024-07-15 10:24:08.704099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:44.149 [2024-07-15 10:24:08.704883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.149 [2024-07-15 10:24:08.704941] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:44.149 [2024-07-15 10:24:08.704972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:44.149 [2024-07-15 10:24:08.705081] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x7d5560 00:16:44.149 [2024-07-15 10:24:08.705088] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:44.149 [2024-07-15 10:24:08.705216] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x975760 00:16:44.149 [2024-07-15 10:24:08.705310] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x7d5560 00:16:44.149 [2024-07-15 10:24:08.705317] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7d5560 00:16:44.149 [2024-07-15 10:24:08.705378] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.149 "name": "raid_bdev1", 00:16:44.149 "uuid": "122d68b3-da13-4b48-8110-6cc8fba8b82b", 
00:16:44.149 "strip_size_kb": 64, 00:16:44.149 "state": "online", 00:16:44.149 "raid_level": "concat", 00:16:44.149 "superblock": true, 00:16:44.149 "num_base_bdevs": 4, 00:16:44.149 "num_base_bdevs_discovered": 4, 00:16:44.149 "num_base_bdevs_operational": 4, 00:16:44.149 "base_bdevs_list": [ 00:16:44.149 { 00:16:44.149 "name": "pt1", 00:16:44.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.149 "is_configured": true, 00:16:44.149 "data_offset": 2048, 00:16:44.149 "data_size": 63488 00:16:44.149 }, 00:16:44.149 { 00:16:44.149 "name": "pt2", 00:16:44.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.149 "is_configured": true, 00:16:44.149 "data_offset": 2048, 00:16:44.149 "data_size": 63488 00:16:44.149 }, 00:16:44.149 { 00:16:44.149 "name": "pt3", 00:16:44.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.149 "is_configured": true, 00:16:44.149 "data_offset": 2048, 00:16:44.149 "data_size": 63488 00:16:44.149 }, 00:16:44.149 { 00:16:44.149 "name": "pt4", 00:16:44.149 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.149 "is_configured": true, 00:16:44.149 "data_offset": 2048, 00:16:44.149 "data_size": 63488 00:16:44.149 } 00:16:44.149 ] 00:16:44.149 }' 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.149 10:24:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.714 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:44.714 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:44.714 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:44.714 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:44.714 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:44.714 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:44.714 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:44.714 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:44.971 [2024-07-15 10:24:09.518520] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.971 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:44.971 "name": "raid_bdev1", 00:16:44.971 "aliases": [ 00:16:44.971 "122d68b3-da13-4b48-8110-6cc8fba8b82b" 00:16:44.971 ], 00:16:44.971 "product_name": "Raid Volume", 00:16:44.971 "block_size": 512, 00:16:44.971 "num_blocks": 253952, 00:16:44.971 "uuid": "122d68b3-da13-4b48-8110-6cc8fba8b82b", 00:16:44.971 "assigned_rate_limits": { 00:16:44.971 "rw_ios_per_sec": 0, 00:16:44.971 "rw_mbytes_per_sec": 0, 00:16:44.971 "r_mbytes_per_sec": 0, 00:16:44.971 "w_mbytes_per_sec": 0 00:16:44.971 }, 00:16:44.971 "claimed": false, 00:16:44.971 "zoned": false, 00:16:44.971 "supported_io_types": { 00:16:44.971 "read": true, 00:16:44.971 "write": true, 00:16:44.971 "unmap": true, 00:16:44.971 "flush": true, 00:16:44.971 "reset": true, 00:16:44.971 "nvme_admin": false, 00:16:44.971 "nvme_io": false, 00:16:44.971 "nvme_io_md": false, 00:16:44.971 "write_zeroes": true, 00:16:44.971 "zcopy": false, 00:16:44.971 "get_zone_info": false, 00:16:44.971 "zone_management": false, 00:16:44.971 
"zone_append": false, 00:16:44.971 "compare": false, 00:16:44.971 "compare_and_write": false, 00:16:44.971 "abort": false, 00:16:44.971 "seek_hole": false, 00:16:44.971 "seek_data": false, 00:16:44.971 "copy": false, 00:16:44.971 "nvme_iov_md": false 00:16:44.971 }, 00:16:44.971 "memory_domains": [ 00:16:44.971 { 00:16:44.971 "dma_device_id": "system", 00:16:44.971 "dma_device_type": 1 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.971 "dma_device_type": 2 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "dma_device_id": "system", 00:16:44.971 "dma_device_type": 1 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.971 "dma_device_type": 2 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "dma_device_id": "system", 00:16:44.971 "dma_device_type": 1 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.971 "dma_device_type": 2 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "dma_device_id": "system", 00:16:44.971 "dma_device_type": 1 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.971 "dma_device_type": 2 00:16:44.971 } 00:16:44.971 ], 00:16:44.971 "driver_specific": { 00:16:44.971 "raid": { 00:16:44.971 "uuid": "122d68b3-da13-4b48-8110-6cc8fba8b82b", 00:16:44.971 "strip_size_kb": 64, 00:16:44.971 "state": "online", 00:16:44.971 "raid_level": "concat", 00:16:44.971 "superblock": true, 00:16:44.971 "num_base_bdevs": 4, 00:16:44.971 "num_base_bdevs_discovered": 4, 00:16:44.971 "num_base_bdevs_operational": 4, 00:16:44.971 "base_bdevs_list": [ 00:16:44.971 { 00:16:44.971 "name": "pt1", 00:16:44.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.971 "is_configured": true, 00:16:44.971 "data_offset": 2048, 00:16:44.971 "data_size": 63488 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "name": "pt2", 00:16:44.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.971 "is_configured": true, 00:16:44.971 "data_offset": 2048, 00:16:44.971 "data_size": 63488 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "name": "pt3", 00:16:44.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.971 "is_configured": true, 00:16:44.971 "data_offset": 2048, 00:16:44.971 "data_size": 63488 00:16:44.971 }, 00:16:44.971 { 00:16:44.971 "name": "pt4", 00:16:44.971 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.972 "is_configured": true, 00:16:44.972 "data_offset": 2048, 00:16:44.972 "data_size": 63488 00:16:44.972 } 00:16:44.972 ] 00:16:44.972 } 00:16:44.972 } 00:16:44.972 }' 00:16:44.972 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.972 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:44.972 pt2 00:16:44.972 pt3 00:16:44.972 pt4' 00:16:44.972 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:44.972 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:44.972 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:44.972 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:44.972 "name": "pt1", 00:16:44.972 "aliases": [ 00:16:44.972 "00000000-0000-0000-0000-000000000001" 00:16:44.972 ], 00:16:44.972 "product_name": 
"passthru", 00:16:44.972 "block_size": 512, 00:16:44.972 "num_blocks": 65536, 00:16:44.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.972 "assigned_rate_limits": { 00:16:44.972 "rw_ios_per_sec": 0, 00:16:44.972 "rw_mbytes_per_sec": 0, 00:16:44.972 "r_mbytes_per_sec": 0, 00:16:44.972 "w_mbytes_per_sec": 0 00:16:44.972 }, 00:16:44.972 "claimed": true, 00:16:44.972 "claim_type": "exclusive_write", 00:16:44.972 "zoned": false, 00:16:44.972 "supported_io_types": { 00:16:44.972 "read": true, 00:16:44.972 "write": true, 00:16:44.972 "unmap": true, 00:16:44.972 "flush": true, 00:16:44.972 "reset": true, 00:16:44.972 "nvme_admin": false, 00:16:44.972 "nvme_io": false, 00:16:44.972 "nvme_io_md": false, 00:16:44.972 "write_zeroes": true, 00:16:44.972 "zcopy": true, 00:16:44.972 "get_zone_info": false, 00:16:44.972 "zone_management": false, 00:16:44.972 "zone_append": false, 00:16:44.972 "compare": false, 00:16:44.972 "compare_and_write": false, 00:16:44.972 "abort": true, 00:16:44.972 "seek_hole": false, 00:16:44.972 "seek_data": false, 00:16:44.972 "copy": true, 00:16:44.972 "nvme_iov_md": false 00:16:44.972 }, 00:16:44.972 "memory_domains": [ 00:16:44.972 { 00:16:44.972 "dma_device_id": "system", 00:16:44.972 "dma_device_type": 1 00:16:44.972 }, 00:16:44.972 { 00:16:44.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.972 "dma_device_type": 2 00:16:44.972 } 00:16:44.972 ], 00:16:44.972 "driver_specific": { 00:16:44.972 "passthru": { 00:16:44.972 "name": "pt1", 00:16:44.972 "base_bdev_name": "malloc1" 00:16:44.972 } 00:16:44.972 } 00:16:44.972 }' 00:16:44.972 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.228 10:24:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.486 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.486 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:45.486 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:45.486 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:45.486 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:45.486 "name": "pt2", 00:16:45.486 "aliases": [ 00:16:45.486 "00000000-0000-0000-0000-000000000002" 00:16:45.486 ], 00:16:45.486 "product_name": "passthru", 00:16:45.486 "block_size": 512, 00:16:45.486 "num_blocks": 65536, 00:16:45.486 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:45.486 "assigned_rate_limits": { 00:16:45.486 "rw_ios_per_sec": 0, 00:16:45.486 "rw_mbytes_per_sec": 0, 00:16:45.486 "r_mbytes_per_sec": 0, 00:16:45.486 "w_mbytes_per_sec": 0 00:16:45.486 }, 00:16:45.486 "claimed": true, 00:16:45.486 "claim_type": "exclusive_write", 00:16:45.486 "zoned": false, 00:16:45.486 "supported_io_types": { 00:16:45.486 "read": true, 00:16:45.486 "write": true, 00:16:45.486 "unmap": true, 00:16:45.486 "flush": true, 00:16:45.486 "reset": true, 00:16:45.486 "nvme_admin": false, 00:16:45.486 "nvme_io": false, 00:16:45.486 "nvme_io_md": false, 00:16:45.486 "write_zeroes": true, 00:16:45.486 "zcopy": true, 00:16:45.486 "get_zone_info": false, 00:16:45.486 "zone_management": false, 00:16:45.486 "zone_append": false, 00:16:45.486 "compare": false, 00:16:45.486 "compare_and_write": false, 00:16:45.486 "abort": true, 00:16:45.486 "seek_hole": false, 00:16:45.486 "seek_data": false, 00:16:45.486 "copy": true, 00:16:45.486 "nvme_iov_md": false 00:16:45.486 }, 00:16:45.486 "memory_domains": [ 00:16:45.486 { 00:16:45.486 "dma_device_id": "system", 00:16:45.486 "dma_device_type": 1 00:16:45.486 }, 00:16:45.486 { 00:16:45.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.486 "dma_device_type": 2 00:16:45.486 } 00:16:45.486 ], 00:16:45.486 "driver_specific": { 00:16:45.486 "passthru": { 00:16:45.486 "name": "pt2", 00:16:45.486 "base_bdev_name": "malloc2" 00:16:45.486 } 00:16:45.486 } 00:16:45.486 }' 00:16:45.486 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.486 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.744 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.002 "name": "pt3", 00:16:46.002 "aliases": [ 00:16:46.002 "00000000-0000-0000-0000-000000000003" 00:16:46.002 ], 00:16:46.002 "product_name": "passthru", 00:16:46.002 "block_size": 512, 00:16:46.002 "num_blocks": 65536, 00:16:46.002 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.002 "assigned_rate_limits": { 00:16:46.002 "rw_ios_per_sec": 0, 00:16:46.002 
"rw_mbytes_per_sec": 0, 00:16:46.002 "r_mbytes_per_sec": 0, 00:16:46.002 "w_mbytes_per_sec": 0 00:16:46.002 }, 00:16:46.002 "claimed": true, 00:16:46.002 "claim_type": "exclusive_write", 00:16:46.002 "zoned": false, 00:16:46.002 "supported_io_types": { 00:16:46.002 "read": true, 00:16:46.002 "write": true, 00:16:46.002 "unmap": true, 00:16:46.002 "flush": true, 00:16:46.002 "reset": true, 00:16:46.002 "nvme_admin": false, 00:16:46.002 "nvme_io": false, 00:16:46.002 "nvme_io_md": false, 00:16:46.002 "write_zeroes": true, 00:16:46.002 "zcopy": true, 00:16:46.002 "get_zone_info": false, 00:16:46.002 "zone_management": false, 00:16:46.002 "zone_append": false, 00:16:46.002 "compare": false, 00:16:46.002 "compare_and_write": false, 00:16:46.002 "abort": true, 00:16:46.002 "seek_hole": false, 00:16:46.002 "seek_data": false, 00:16:46.002 "copy": true, 00:16:46.002 "nvme_iov_md": false 00:16:46.002 }, 00:16:46.002 "memory_domains": [ 00:16:46.002 { 00:16:46.002 "dma_device_id": "system", 00:16:46.002 "dma_device_type": 1 00:16:46.002 }, 00:16:46.002 { 00:16:46.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.002 "dma_device_type": 2 00:16:46.002 } 00:16:46.002 ], 00:16:46.002 "driver_specific": { 00:16:46.002 "passthru": { 00:16:46.002 "name": "pt3", 00:16:46.002 "base_bdev_name": "malloc3" 00:16:46.002 } 00:16:46.002 } 00:16:46.002 }' 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:46.002 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:46.259 10:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.516 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.516 "name": "pt4", 00:16:46.516 "aliases": [ 00:16:46.516 "00000000-0000-0000-0000-000000000004" 00:16:46.516 ], 00:16:46.516 "product_name": "passthru", 00:16:46.516 "block_size": 512, 00:16:46.516 "num_blocks": 65536, 00:16:46.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:46.516 "assigned_rate_limits": { 00:16:46.516 "rw_ios_per_sec": 0, 00:16:46.516 "rw_mbytes_per_sec": 0, 00:16:46.516 "r_mbytes_per_sec": 0, 00:16:46.516 "w_mbytes_per_sec": 0 00:16:46.516 }, 00:16:46.516 "claimed": 
true, 00:16:46.516 "claim_type": "exclusive_write", 00:16:46.516 "zoned": false, 00:16:46.516 "supported_io_types": { 00:16:46.516 "read": true, 00:16:46.517 "write": true, 00:16:46.517 "unmap": true, 00:16:46.517 "flush": true, 00:16:46.517 "reset": true, 00:16:46.517 "nvme_admin": false, 00:16:46.517 "nvme_io": false, 00:16:46.517 "nvme_io_md": false, 00:16:46.517 "write_zeroes": true, 00:16:46.517 "zcopy": true, 00:16:46.517 "get_zone_info": false, 00:16:46.517 "zone_management": false, 00:16:46.517 "zone_append": false, 00:16:46.517 "compare": false, 00:16:46.517 "compare_and_write": false, 00:16:46.517 "abort": true, 00:16:46.517 "seek_hole": false, 00:16:46.517 "seek_data": false, 00:16:46.517 "copy": true, 00:16:46.517 "nvme_iov_md": false 00:16:46.517 }, 00:16:46.517 "memory_domains": [ 00:16:46.517 { 00:16:46.517 "dma_device_id": "system", 00:16:46.517 "dma_device_type": 1 00:16:46.517 }, 00:16:46.517 { 00:16:46.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.517 "dma_device_type": 2 00:16:46.517 } 00:16:46.517 ], 00:16:46.517 "driver_specific": { 00:16:46.517 "passthru": { 00:16:46.517 "name": "pt4", 00:16:46.517 "base_bdev_name": "malloc4" 00:16:46.517 } 00:16:46.517 } 00:16:46.517 }' 00:16:46.517 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.517 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.517 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:46.517 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.517 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.517 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.517 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.774 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.774 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.774 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.774 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.774 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.774 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:46.774 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:47.032 [2024-07-15 10:24:11.607927] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.032 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=122d68b3-da13-4b48-8110-6cc8fba8b82b 00:16:47.032 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 122d68b3-da13-4b48-8110-6cc8fba8b82b ']' 00:16:47.032 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:47.032 [2024-07-15 10:24:11.780164] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.032 [2024-07-15 10:24:11.780177] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:16:47.032 [2024-07-15 10:24:11.780211] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.032 [2024-07-15 10:24:11.780257] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.032 [2024-07-15 10:24:11.780264] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7d5560 name raid_bdev1, state offline 00:16:47.032 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.032 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:47.289 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:47.289 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:47.289 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:47.289 10:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:47.547 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:47.547 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:47.547 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:47.547 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:47.806 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:47.806 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:48.064 10:24:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:48.064 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:48.322 [2024-07-15 10:24:12.935112] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:48.322 [2024-07-15 10:24:12.936076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:48.322 [2024-07-15 10:24:12.936106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:48.322 [2024-07-15 10:24:12.936128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:48.322 [2024-07-15 10:24:12.936159] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:48.322 [2024-07-15 10:24:12.936188] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:48.322 [2024-07-15 10:24:12.936202] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:48.322 [2024-07-15 10:24:12.936216] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:48.322 [2024-07-15 10:24:12.936228] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.322 [2024-07-15 10:24:12.936234] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x97fd50 name raid_bdev1, state configuring 00:16:48.322 request: 00:16:48.322 { 00:16:48.322 "name": "raid_bdev1", 00:16:48.322 "raid_level": "concat", 00:16:48.322 "base_bdevs": [ 00:16:48.322 "malloc1", 00:16:48.322 "malloc2", 00:16:48.322 "malloc3", 00:16:48.322 "malloc4" 00:16:48.322 ], 00:16:48.322 "strip_size_kb": 64, 00:16:48.322 "superblock": false, 00:16:48.322 "method": "bdev_raid_create", 00:16:48.322 "req_id": 1 00:16:48.322 } 00:16:48.322 Got JSON-RPC error response 00:16:48.322 response: 00:16:48.322 { 00:16:48.322 "code": -17, 00:16:48.322 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:48.323 } 00:16:48.323 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:48.323 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:48.323 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:48.323 10:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( 
!es == 0 )) 00:16:48.323 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.323 10:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:48.323 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:48.323 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:48.323 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.581 [2024-07-15 10:24:13.255912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.581 [2024-07-15 10:24:13.255946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.581 [2024-07-15 10:24:13.255959] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x97f3f0 00:16:48.581 [2024-07-15 10:24:13.255984] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.581 [2024-07-15 10:24:13.257116] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.581 [2024-07-15 10:24:13.257138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.581 [2024-07-15 10:24:13.257184] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:48.581 [2024-07-15 10:24:13.257201] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.581 pt1 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.581 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.839 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.839 "name": "raid_bdev1", 00:16:48.839 "uuid": "122d68b3-da13-4b48-8110-6cc8fba8b82b", 00:16:48.839 "strip_size_kb": 64, 00:16:48.839 "state": "configuring", 00:16:48.839 "raid_level": "concat", 00:16:48.839 "superblock": true, 00:16:48.839 "num_base_bdevs": 4, 00:16:48.839 
"num_base_bdevs_discovered": 1, 00:16:48.839 "num_base_bdevs_operational": 4, 00:16:48.839 "base_bdevs_list": [ 00:16:48.839 { 00:16:48.839 "name": "pt1", 00:16:48.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.839 "is_configured": true, 00:16:48.839 "data_offset": 2048, 00:16:48.839 "data_size": 63488 00:16:48.839 }, 00:16:48.839 { 00:16:48.839 "name": null, 00:16:48.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.839 "is_configured": false, 00:16:48.839 "data_offset": 2048, 00:16:48.839 "data_size": 63488 00:16:48.839 }, 00:16:48.839 { 00:16:48.839 "name": null, 00:16:48.839 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.839 "is_configured": false, 00:16:48.839 "data_offset": 2048, 00:16:48.839 "data_size": 63488 00:16:48.839 }, 00:16:48.839 { 00:16:48.839 "name": null, 00:16:48.839 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.839 "is_configured": false, 00:16:48.839 "data_offset": 2048, 00:16:48.839 "data_size": 63488 00:16:48.839 } 00:16:48.839 ] 00:16:48.839 }' 00:16:48.839 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.839 10:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.404 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:16:49.404 10:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.404 [2024-07-15 10:24:14.078034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.404 [2024-07-15 10:24:14.078071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.404 [2024-07-15 10:24:14.078085] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7d40e0 00:16:49.404 [2024-07-15 10:24:14.078093] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.404 [2024-07-15 10:24:14.078321] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.404 [2024-07-15 10:24:14.078332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.404 [2024-07-15 10:24:14.078374] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.404 [2024-07-15 10:24:14.078386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.404 pt2 00:16:49.404 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:49.663 [2024-07-15 10:24:14.250482] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.663 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.921 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.921 "name": "raid_bdev1", 00:16:49.921 "uuid": "122d68b3-da13-4b48-8110-6cc8fba8b82b", 00:16:49.921 "strip_size_kb": 64, 00:16:49.921 "state": "configuring", 00:16:49.922 "raid_level": "concat", 00:16:49.922 "superblock": true, 00:16:49.922 "num_base_bdevs": 4, 00:16:49.922 "num_base_bdevs_discovered": 1, 00:16:49.922 "num_base_bdevs_operational": 4, 00:16:49.922 "base_bdevs_list": [ 00:16:49.922 { 00:16:49.922 "name": "pt1", 00:16:49.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.922 "is_configured": true, 00:16:49.922 "data_offset": 2048, 00:16:49.922 "data_size": 63488 00:16:49.922 }, 00:16:49.922 { 00:16:49.922 "name": null, 00:16:49.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.922 "is_configured": false, 00:16:49.922 "data_offset": 2048, 00:16:49.922 "data_size": 63488 00:16:49.922 }, 00:16:49.922 { 00:16:49.922 "name": null, 00:16:49.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.922 "is_configured": false, 00:16:49.922 "data_offset": 2048, 00:16:49.922 "data_size": 63488 00:16:49.922 }, 00:16:49.922 { 00:16:49.922 "name": null, 00:16:49.922 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:49.922 "is_configured": false, 00:16:49.922 "data_offset": 2048, 00:16:49.922 "data_size": 63488 00:16:49.922 } 00:16:49.922 ] 00:16:49.922 }' 00:16:49.922 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.922 10:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.180 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:50.180 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:50.180 10:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.438 [2024-07-15 10:24:15.108691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.438 [2024-07-15 10:24:15.108732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.438 [2024-07-15 10:24:15.108746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7dc520 00:16:50.438 [2024-07-15 10:24:15.108754] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.438 [2024-07-15 10:24:15.109010] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.438 [2024-07-15 10:24:15.109023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.438 [2024-07-15 
10:24:15.109069] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:50.438 [2024-07-15 10:24:15.109082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.438 pt2 00:16:50.438 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:50.438 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:50.438 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:50.697 [2024-07-15 10:24:15.277127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:50.697 [2024-07-15 10:24:15.277159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.697 [2024-07-15 10:24:15.277171] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7d66e0 00:16:50.697 [2024-07-15 10:24:15.277178] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.697 [2024-07-15 10:24:15.277395] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.697 [2024-07-15 10:24:15.277406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:50.697 [2024-07-15 10:24:15.277447] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:50.697 [2024-07-15 10:24:15.277459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.697 pt3 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:50.697 [2024-07-15 10:24:15.429521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:50.697 [2024-07-15 10:24:15.429553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.697 [2024-07-15 10:24:15.429565] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7d30f0 00:16:50.697 [2024-07-15 10:24:15.429573] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.697 [2024-07-15 10:24:15.429774] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.697 [2024-07-15 10:24:15.429786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:50.697 [2024-07-15 10:24:15.429825] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:50.697 [2024-07-15 10:24:15.429837] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:50.697 [2024-07-15 10:24:15.429926] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x7d5e40 00:16:50.697 [2024-07-15 10:24:15.429934] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:50.697 [2024-07-15 10:24:15.430045] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x7d3e80 00:16:50.697 [2024-07-15 10:24:15.430134] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x7d5e40 00:16:50.697 [2024-07-15 10:24:15.430140] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x7d5e40 00:16:50.697 [2024-07-15 10:24:15.430204] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.697 pt4 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.697 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.955 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.955 "name": "raid_bdev1", 00:16:50.955 "uuid": "122d68b3-da13-4b48-8110-6cc8fba8b82b", 00:16:50.955 "strip_size_kb": 64, 00:16:50.956 "state": "online", 00:16:50.956 "raid_level": "concat", 00:16:50.956 "superblock": true, 00:16:50.956 "num_base_bdevs": 4, 00:16:50.956 "num_base_bdevs_discovered": 4, 00:16:50.956 "num_base_bdevs_operational": 4, 00:16:50.956 "base_bdevs_list": [ 00:16:50.956 { 00:16:50.956 "name": "pt1", 00:16:50.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.956 "is_configured": true, 00:16:50.956 "data_offset": 2048, 00:16:50.956 "data_size": 63488 00:16:50.956 }, 00:16:50.956 { 00:16:50.956 "name": "pt2", 00:16:50.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.956 "is_configured": true, 00:16:50.956 "data_offset": 2048, 00:16:50.956 "data_size": 63488 00:16:50.956 }, 00:16:50.956 { 00:16:50.956 "name": "pt3", 00:16:50.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.956 "is_configured": true, 00:16:50.956 "data_offset": 2048, 00:16:50.956 "data_size": 63488 00:16:50.956 }, 00:16:50.956 { 00:16:50.956 "name": "pt4", 00:16:50.956 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.956 "is_configured": true, 00:16:50.956 "data_offset": 2048, 00:16:50.956 "data_size": 63488 00:16:50.956 } 00:16:50.956 ] 00:16:50.956 }' 00:16:50.956 10:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.956 10:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.521 
10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:51.521 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:51.521 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:51.521 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:51.521 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:51.521 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:51.521 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:51.522 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:51.522 [2024-07-15 10:24:16.223745] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.522 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:51.522 "name": "raid_bdev1", 00:16:51.522 "aliases": [ 00:16:51.522 "122d68b3-da13-4b48-8110-6cc8fba8b82b" 00:16:51.522 ], 00:16:51.522 "product_name": "Raid Volume", 00:16:51.522 "block_size": 512, 00:16:51.522 "num_blocks": 253952, 00:16:51.522 "uuid": "122d68b3-da13-4b48-8110-6cc8fba8b82b", 00:16:51.522 "assigned_rate_limits": { 00:16:51.522 "rw_ios_per_sec": 0, 00:16:51.522 "rw_mbytes_per_sec": 0, 00:16:51.522 "r_mbytes_per_sec": 0, 00:16:51.522 "w_mbytes_per_sec": 0 00:16:51.522 }, 00:16:51.522 "claimed": false, 00:16:51.522 "zoned": false, 00:16:51.522 "supported_io_types": { 00:16:51.522 "read": true, 00:16:51.522 "write": true, 00:16:51.522 "unmap": true, 00:16:51.522 "flush": true, 00:16:51.522 "reset": true, 00:16:51.522 "nvme_admin": false, 00:16:51.522 "nvme_io": false, 00:16:51.522 "nvme_io_md": false, 00:16:51.522 "write_zeroes": true, 00:16:51.522 "zcopy": false, 00:16:51.522 "get_zone_info": false, 00:16:51.522 "zone_management": false, 00:16:51.522 "zone_append": false, 00:16:51.522 "compare": false, 00:16:51.522 "compare_and_write": false, 00:16:51.522 "abort": false, 00:16:51.522 "seek_hole": false, 00:16:51.522 "seek_data": false, 00:16:51.522 "copy": false, 00:16:51.522 "nvme_iov_md": false 00:16:51.522 }, 00:16:51.522 "memory_domains": [ 00:16:51.522 { 00:16:51.522 "dma_device_id": "system", 00:16:51.522 "dma_device_type": 1 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.522 "dma_device_type": 2 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "dma_device_id": "system", 00:16:51.522 "dma_device_type": 1 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.522 "dma_device_type": 2 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "dma_device_id": "system", 00:16:51.522 "dma_device_type": 1 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.522 "dma_device_type": 2 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "dma_device_id": "system", 00:16:51.522 "dma_device_type": 1 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.522 "dma_device_type": 2 00:16:51.522 } 00:16:51.522 ], 00:16:51.522 "driver_specific": { 00:16:51.522 "raid": { 00:16:51.522 "uuid": "122d68b3-da13-4b48-8110-6cc8fba8b82b", 00:16:51.522 "strip_size_kb": 64, 00:16:51.522 "state": "online", 00:16:51.522 
"raid_level": "concat", 00:16:51.522 "superblock": true, 00:16:51.522 "num_base_bdevs": 4, 00:16:51.522 "num_base_bdevs_discovered": 4, 00:16:51.522 "num_base_bdevs_operational": 4, 00:16:51.522 "base_bdevs_list": [ 00:16:51.522 { 00:16:51.522 "name": "pt1", 00:16:51.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.522 "is_configured": true, 00:16:51.522 "data_offset": 2048, 00:16:51.522 "data_size": 63488 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "name": "pt2", 00:16:51.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.522 "is_configured": true, 00:16:51.522 "data_offset": 2048, 00:16:51.522 "data_size": 63488 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "name": "pt3", 00:16:51.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.522 "is_configured": true, 00:16:51.522 "data_offset": 2048, 00:16:51.522 "data_size": 63488 00:16:51.522 }, 00:16:51.522 { 00:16:51.522 "name": "pt4", 00:16:51.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:51.522 "is_configured": true, 00:16:51.522 "data_offset": 2048, 00:16:51.522 "data_size": 63488 00:16:51.522 } 00:16:51.522 ] 00:16:51.522 } 00:16:51.522 } 00:16:51.522 }' 00:16:51.522 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:51.522 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:51.522 pt2 00:16:51.522 pt3 00:16:51.522 pt4' 00:16:51.522 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:51.522 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.522 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:51.780 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.780 "name": "pt1", 00:16:51.780 "aliases": [ 00:16:51.780 "00000000-0000-0000-0000-000000000001" 00:16:51.780 ], 00:16:51.780 "product_name": "passthru", 00:16:51.780 "block_size": 512, 00:16:51.780 "num_blocks": 65536, 00:16:51.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.780 "assigned_rate_limits": { 00:16:51.780 "rw_ios_per_sec": 0, 00:16:51.780 "rw_mbytes_per_sec": 0, 00:16:51.780 "r_mbytes_per_sec": 0, 00:16:51.780 "w_mbytes_per_sec": 0 00:16:51.780 }, 00:16:51.780 "claimed": true, 00:16:51.780 "claim_type": "exclusive_write", 00:16:51.780 "zoned": false, 00:16:51.780 "supported_io_types": { 00:16:51.780 "read": true, 00:16:51.780 "write": true, 00:16:51.780 "unmap": true, 00:16:51.780 "flush": true, 00:16:51.780 "reset": true, 00:16:51.780 "nvme_admin": false, 00:16:51.780 "nvme_io": false, 00:16:51.780 "nvme_io_md": false, 00:16:51.780 "write_zeroes": true, 00:16:51.780 "zcopy": true, 00:16:51.780 "get_zone_info": false, 00:16:51.780 "zone_management": false, 00:16:51.780 "zone_append": false, 00:16:51.780 "compare": false, 00:16:51.780 "compare_and_write": false, 00:16:51.780 "abort": true, 00:16:51.780 "seek_hole": false, 00:16:51.780 "seek_data": false, 00:16:51.780 "copy": true, 00:16:51.780 "nvme_iov_md": false 00:16:51.780 }, 00:16:51.780 "memory_domains": [ 00:16:51.780 { 00:16:51.780 "dma_device_id": "system", 00:16:51.780 "dma_device_type": 1 00:16:51.780 }, 00:16:51.780 { 00:16:51.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.780 "dma_device_type": 2 00:16:51.780 } 00:16:51.780 ], 
00:16:51.780 "driver_specific": { 00:16:51.781 "passthru": { 00:16:51.781 "name": "pt1", 00:16:51.781 "base_bdev_name": "malloc1" 00:16:51.781 } 00:16:51.781 } 00:16:51.781 }' 00:16:51.781 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.781 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.781 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:51.781 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.041 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:52.329 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.329 "name": "pt2", 00:16:52.329 "aliases": [ 00:16:52.329 "00000000-0000-0000-0000-000000000002" 00:16:52.329 ], 00:16:52.329 "product_name": "passthru", 00:16:52.329 "block_size": 512, 00:16:52.329 "num_blocks": 65536, 00:16:52.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.329 "assigned_rate_limits": { 00:16:52.329 "rw_ios_per_sec": 0, 00:16:52.329 "rw_mbytes_per_sec": 0, 00:16:52.329 "r_mbytes_per_sec": 0, 00:16:52.329 "w_mbytes_per_sec": 0 00:16:52.329 }, 00:16:52.329 "claimed": true, 00:16:52.329 "claim_type": "exclusive_write", 00:16:52.329 "zoned": false, 00:16:52.329 "supported_io_types": { 00:16:52.329 "read": true, 00:16:52.329 "write": true, 00:16:52.329 "unmap": true, 00:16:52.329 "flush": true, 00:16:52.329 "reset": true, 00:16:52.329 "nvme_admin": false, 00:16:52.329 "nvme_io": false, 00:16:52.329 "nvme_io_md": false, 00:16:52.329 "write_zeroes": true, 00:16:52.329 "zcopy": true, 00:16:52.329 "get_zone_info": false, 00:16:52.329 "zone_management": false, 00:16:52.329 "zone_append": false, 00:16:52.329 "compare": false, 00:16:52.329 "compare_and_write": false, 00:16:52.329 "abort": true, 00:16:52.329 "seek_hole": false, 00:16:52.329 "seek_data": false, 00:16:52.329 "copy": true, 00:16:52.329 "nvme_iov_md": false 00:16:52.329 }, 00:16:52.329 "memory_domains": [ 00:16:52.329 { 00:16:52.329 "dma_device_id": "system", 00:16:52.329 "dma_device_type": 1 00:16:52.329 }, 00:16:52.329 { 00:16:52.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.329 "dma_device_type": 2 00:16:52.329 } 00:16:52.329 ], 00:16:52.329 "driver_specific": { 00:16:52.329 "passthru": { 00:16:52.329 "name": "pt2", 00:16:52.329 "base_bdev_name": "malloc2" 
00:16:52.329 } 00:16:52.329 } 00:16:52.329 }' 00:16:52.329 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.329 10:24:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.329 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.329 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.329 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:52.588 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:52.847 "name": "pt3", 00:16:52.847 "aliases": [ 00:16:52.847 "00000000-0000-0000-0000-000000000003" 00:16:52.847 ], 00:16:52.847 "product_name": "passthru", 00:16:52.847 "block_size": 512, 00:16:52.847 "num_blocks": 65536, 00:16:52.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.847 "assigned_rate_limits": { 00:16:52.847 "rw_ios_per_sec": 0, 00:16:52.847 "rw_mbytes_per_sec": 0, 00:16:52.847 "r_mbytes_per_sec": 0, 00:16:52.847 "w_mbytes_per_sec": 0 00:16:52.847 }, 00:16:52.847 "claimed": true, 00:16:52.847 "claim_type": "exclusive_write", 00:16:52.847 "zoned": false, 00:16:52.847 "supported_io_types": { 00:16:52.847 "read": true, 00:16:52.847 "write": true, 00:16:52.847 "unmap": true, 00:16:52.847 "flush": true, 00:16:52.847 "reset": true, 00:16:52.847 "nvme_admin": false, 00:16:52.847 "nvme_io": false, 00:16:52.847 "nvme_io_md": false, 00:16:52.847 "write_zeroes": true, 00:16:52.847 "zcopy": true, 00:16:52.847 "get_zone_info": false, 00:16:52.847 "zone_management": false, 00:16:52.847 "zone_append": false, 00:16:52.847 "compare": false, 00:16:52.847 "compare_and_write": false, 00:16:52.847 "abort": true, 00:16:52.847 "seek_hole": false, 00:16:52.847 "seek_data": false, 00:16:52.847 "copy": true, 00:16:52.847 "nvme_iov_md": false 00:16:52.847 }, 00:16:52.847 "memory_domains": [ 00:16:52.847 { 00:16:52.847 "dma_device_id": "system", 00:16:52.847 "dma_device_type": 1 00:16:52.847 }, 00:16:52.847 { 00:16:52.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.847 "dma_device_type": 2 00:16:52.847 } 00:16:52.847 ], 00:16:52.847 "driver_specific": { 00:16:52.847 "passthru": { 00:16:52.847 "name": "pt3", 00:16:52.847 "base_bdev_name": "malloc3" 00:16:52.847 } 00:16:52.847 } 00:16:52.847 }' 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:52.847 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.105 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.105 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.105 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.105 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.105 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.105 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:53.105 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.364 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.364 "name": "pt4", 00:16:53.364 "aliases": [ 00:16:53.364 "00000000-0000-0000-0000-000000000004" 00:16:53.364 ], 00:16:53.364 "product_name": "passthru", 00:16:53.364 "block_size": 512, 00:16:53.364 "num_blocks": 65536, 00:16:53.364 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:53.364 "assigned_rate_limits": { 00:16:53.364 "rw_ios_per_sec": 0, 00:16:53.364 "rw_mbytes_per_sec": 0, 00:16:53.364 "r_mbytes_per_sec": 0, 00:16:53.364 "w_mbytes_per_sec": 0 00:16:53.364 }, 00:16:53.364 "claimed": true, 00:16:53.364 "claim_type": "exclusive_write", 00:16:53.364 "zoned": false, 00:16:53.364 "supported_io_types": { 00:16:53.364 "read": true, 00:16:53.364 "write": true, 00:16:53.364 "unmap": true, 00:16:53.364 "flush": true, 00:16:53.364 "reset": true, 00:16:53.364 "nvme_admin": false, 00:16:53.364 "nvme_io": false, 00:16:53.364 "nvme_io_md": false, 00:16:53.364 "write_zeroes": true, 00:16:53.364 "zcopy": true, 00:16:53.364 "get_zone_info": false, 00:16:53.364 "zone_management": false, 00:16:53.364 "zone_append": false, 00:16:53.364 "compare": false, 00:16:53.364 "compare_and_write": false, 00:16:53.364 "abort": true, 00:16:53.364 "seek_hole": false, 00:16:53.364 "seek_data": false, 00:16:53.364 "copy": true, 00:16:53.364 "nvme_iov_md": false 00:16:53.364 }, 00:16:53.364 "memory_domains": [ 00:16:53.364 { 00:16:53.364 "dma_device_id": "system", 00:16:53.364 "dma_device_type": 1 00:16:53.364 }, 00:16:53.364 { 00:16:53.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.364 "dma_device_type": 2 00:16:53.364 } 00:16:53.364 ], 00:16:53.364 "driver_specific": { 00:16:53.364 "passthru": { 00:16:53.364 "name": "pt4", 00:16:53.364 "base_bdev_name": "malloc4" 00:16:53.364 } 00:16:53.364 } 00:16:53.364 }' 00:16:53.364 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.364 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.364 10:24:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:53.364 10:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.364 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.364 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.364 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.364 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.364 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.364 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.622 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:53.623 [2024-07-15 10:24:18.373276] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 122d68b3-da13-4b48-8110-6cc8fba8b82b '!=' 122d68b3-da13-4b48-8110-6cc8fba8b82b ']' 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1828191 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1828191 ']' 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1828191 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.623 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1828191 00:16:53.881 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:53.881 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:53.881 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1828191' 00:16:53.881 killing process with pid 1828191 00:16:53.881 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1828191 00:16:53.881 [2024-07-15 10:24:18.433914] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.881 [2024-07-15 10:24:18.433961] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.881 [2024-07-15 10:24:18.434016] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.881 [2024-07-15 10:24:18.434023] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x7d5e40 name raid_bdev1, state offline 
00:16:53.881 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1828191 00:16:53.881 [2024-07-15 10:24:18.464411] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.881 10:24:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:53.881 00:16:53.881 real 0m12.229s 00:16:53.881 user 0m21.940s 00:16:53.881 sys 0m2.291s 00:16:53.881 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:53.881 10:24:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.881 ************************************ 00:16:53.881 END TEST raid_superblock_test 00:16:53.881 ************************************ 00:16:54.139 10:24:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:54.139 10:24:18 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:16:54.139 10:24:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:54.139 10:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.139 10:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.139 ************************************ 00:16:54.139 START TEST raid_read_error_test 00:16:54.139 ************************************ 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 
00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.aw5Fu6LxLI 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1830647 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1830647 /var/tmp/spdk-raid.sock 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1830647 ']' 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.139 10:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.139 [2024-07-15 10:24:18.779603] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:54.139 [2024-07-15 10:24:18.779647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830647 ] 00:16:54.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.139 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:54.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.139 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:54.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.139 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:54.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.139 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:54.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.139 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:54.140 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:54.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:54.140 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:54.140 [2024-07-15 10:24:18.870821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.397 [2024-07-15 10:24:18.946278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.397 [2024-07-15 10:24:19.000163] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.397 [2024-07-15 10:24:19.000190] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.963 10:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.963 10:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:54.963 10:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:54.963 10:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:54.963 BaseBdev1_malloc 00:16:54.963 10:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:55.222 true 00:16:55.222 10:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:55.481 [2024-07-15 10:24:20.076914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:55.481 [2024-07-15 10:24:20.076952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.481 [2024-07-15 10:24:20.076966] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb2d190 00:16:55.481 [2024-07-15 10:24:20.076975] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.481 [2024-07-15 10:24:20.078154] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.481 [2024-07-15 10:24:20.078176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:55.481 BaseBdev1 00:16:55.481 10:24:20 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:55.481 10:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:55.481 BaseBdev2_malloc 00:16:55.481 10:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:55.740 true 00:16:55.740 10:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:55.999 [2024-07-15 10:24:20.569686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:55.999 [2024-07-15 10:24:20.569716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.999 [2024-07-15 10:24:20.569730] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb31e20 00:16:55.999 [2024-07-15 10:24:20.569738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.999 [2024-07-15 10:24:20.570695] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.999 [2024-07-15 10:24:20.570716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:55.999 BaseBdev2 00:16:55.999 10:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:55.999 10:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:55.999 BaseBdev3_malloc 00:16:55.999 10:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:56.258 true 00:16:56.258 10:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:56.517 [2024-07-15 10:24:21.058753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:56.517 [2024-07-15 10:24:21.058785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.517 [2024-07-15 10:24:21.058800] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb32d90 00:16:56.517 [2024-07-15 10:24:21.058808] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.517 [2024-07-15 10:24:21.059803] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.517 [2024-07-15 10:24:21.059824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:56.517 BaseBdev3 00:16:56.517 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:56.517 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:56.517 BaseBdev4_malloc 00:16:56.517 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:56.775 true 00:16:56.775 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:57.034 [2024-07-15 10:24:21.571670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:57.035 [2024-07-15 10:24:21.571702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.035 [2024-07-15 10:24:21.571715] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb35000 00:16:57.035 [2024-07-15 10:24:21.571723] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.035 [2024-07-15 10:24:21.572761] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.035 [2024-07-15 10:24:21.572782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:57.035 BaseBdev4 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:57.035 [2024-07-15 10:24:21.736126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.035 [2024-07-15 10:24:21.736962] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.035 [2024-07-15 10:24:21.737008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.035 [2024-07-15 10:24:21.737043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.035 [2024-07-15 10:24:21.737190] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xb35dd0 00:16:57.035 [2024-07-15 10:24:21.737197] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:57.035 [2024-07-15 10:24:21.737323] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb37080 00:16:57.035 [2024-07-15 10:24:21.737416] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb35dd0 00:16:57.035 [2024-07-15 10:24:21.737422] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb35dd0 00:16:57.035 [2024-07-15 10:24:21.737485] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.035 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.293 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.293 "name": "raid_bdev1", 00:16:57.293 "uuid": "5f9ffac4-c72e-4998-900c-e80395d4ea0a", 00:16:57.293 "strip_size_kb": 64, 00:16:57.293 "state": "online", 00:16:57.293 "raid_level": "concat", 00:16:57.293 "superblock": true, 00:16:57.293 "num_base_bdevs": 4, 00:16:57.293 "num_base_bdevs_discovered": 4, 00:16:57.293 "num_base_bdevs_operational": 4, 00:16:57.293 "base_bdevs_list": [ 00:16:57.293 { 00:16:57.293 "name": "BaseBdev1", 00:16:57.293 "uuid": "63f37894-80ed-5c0e-8270-febfb5d3907d", 00:16:57.293 "is_configured": true, 00:16:57.293 "data_offset": 2048, 00:16:57.293 "data_size": 63488 00:16:57.293 }, 00:16:57.293 { 00:16:57.293 "name": "BaseBdev2", 00:16:57.294 "uuid": "bfe365ac-4d33-53ac-8645-84548f9b817c", 00:16:57.294 "is_configured": true, 00:16:57.294 "data_offset": 2048, 00:16:57.294 "data_size": 63488 00:16:57.294 }, 00:16:57.294 { 00:16:57.294 "name": "BaseBdev3", 00:16:57.294 "uuid": "4d095224-a6f2-5181-a51b-84049db284d2", 00:16:57.294 "is_configured": true, 00:16:57.294 "data_offset": 2048, 00:16:57.294 "data_size": 63488 00:16:57.294 }, 00:16:57.294 { 00:16:57.294 "name": "BaseBdev4", 00:16:57.294 "uuid": "658ab778-b8a5-5a8d-974c-ef3c0c196d97", 00:16:57.294 "is_configured": true, 00:16:57.294 "data_offset": 2048, 00:16:57.294 "data_size": 63488 00:16:57.294 } 00:16:57.294 ] 00:16:57.294 }' 00:16:57.294 10:24:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.294 10:24:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.862 10:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:57.862 10:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:57.862 [2024-07-15 10:24:22.470217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb3ac70 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.799 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.058 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.058 "name": "raid_bdev1", 00:16:59.058 "uuid": "5f9ffac4-c72e-4998-900c-e80395d4ea0a", 00:16:59.058 "strip_size_kb": 64, 00:16:59.058 "state": "online", 00:16:59.058 "raid_level": "concat", 00:16:59.058 "superblock": true, 00:16:59.058 "num_base_bdevs": 4, 00:16:59.058 "num_base_bdevs_discovered": 4, 00:16:59.058 "num_base_bdevs_operational": 4, 00:16:59.058 "base_bdevs_list": [ 00:16:59.058 { 00:16:59.058 "name": "BaseBdev1", 00:16:59.058 "uuid": "63f37894-80ed-5c0e-8270-febfb5d3907d", 00:16:59.058 "is_configured": true, 00:16:59.058 "data_offset": 2048, 00:16:59.058 "data_size": 63488 00:16:59.058 }, 00:16:59.058 { 00:16:59.058 "name": "BaseBdev2", 00:16:59.058 "uuid": "bfe365ac-4d33-53ac-8645-84548f9b817c", 00:16:59.058 "is_configured": true, 00:16:59.058 "data_offset": 2048, 00:16:59.058 "data_size": 63488 00:16:59.058 }, 00:16:59.058 { 00:16:59.058 "name": "BaseBdev3", 00:16:59.058 "uuid": "4d095224-a6f2-5181-a51b-84049db284d2", 00:16:59.058 "is_configured": true, 00:16:59.058 "data_offset": 2048, 00:16:59.058 "data_size": 63488 00:16:59.058 }, 00:16:59.058 { 00:16:59.058 "name": "BaseBdev4", 00:16:59.058 "uuid": "658ab778-b8a5-5a8d-974c-ef3c0c196d97", 00:16:59.058 "is_configured": true, 00:16:59.058 "data_offset": 2048, 00:16:59.058 "data_size": 63488 00:16:59.058 } 00:16:59.058 ] 00:16:59.058 }' 00:16:59.058 10:24:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.058 10:24:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.626 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:59.626 [2024-07-15 10:24:24.378310] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.626 [2024-07-15 10:24:24.378339] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.626 [2024-07-15 10:24:24.380363] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.626 [2024-07-15 10:24:24.380388] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.626 [2024-07-15 10:24:24.380412] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.626 [2024-07-15 10:24:24.380419] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb35dd0 name raid_bdev1, state offline 00:16:59.626 0 00:16:59.626 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1830647 00:16:59.626 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1830647 ']' 00:16:59.626 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1830647 00:16:59.626 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:59.626 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1830647 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1830647' 00:16:59.885 killing process with pid 1830647 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1830647 00:16:59.885 [2024-07-15 10:24:24.464596] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1830647 00:16:59.885 [2024-07-15 10:24:24.490075] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.aw5Fu6LxLI 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:16:59.885 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:00.144 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:00.144 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:00.144 10:24:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:17:00.144 00:17:00.144 real 0m5.966s 00:17:00.144 user 0m9.199s 00:17:00.144 sys 0m1.050s 00:17:00.144 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.144 10:24:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.144 ************************************ 00:17:00.144 END TEST raid_read_error_test 00:17:00.144 ************************************ 00:17:00.144 10:24:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:00.144 10:24:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:17:00.144 10:24:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:00.144 10:24:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.144 10:24:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.144 ************************************ 00:17:00.144 START TEST raid_write_error_test 00:17:00.144 ************************************ 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test 
concat 4 write 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:00.144 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.uV16EyMB2X 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1831796 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1831796 /var/tmp/spdk-raid.sock 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1831796 ']' 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:00.145 
10:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:00.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.145 10:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:00.145 [2024-07-15 10:24:24.801785] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:00.145 [2024-07-15 10:24:24.801826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831796 ] 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:00.145 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:00.145 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:00.145 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:00.145 [2024-07-15 10:24:24.893208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.404 [2024-07-15 10:24:24.967652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.404 [2024-07-15 10:24:25.021038] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.404 [2024-07-15 10:24:25.021064] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.971 10:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.971 10:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:00.971 10:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:00.971 10:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:00.971 BaseBdev1_malloc 00:17:00.971 10:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:01.229 true 00:17:01.229 10:24:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:01.487 [2024-07-15 10:24:26.044958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:01.487 [2024-07-15 10:24:26.044991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.487 [2024-07-15 10:24:26.045007] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2505190 00:17:01.487 [2024-07-15 10:24:26.045016] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.487 [2024-07-15 10:24:26.046195] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.487 [2024-07-15 10:24:26.046216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.487 BaseBdev1 00:17:01.487 10:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:01.487 10:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:01.487 BaseBdev2_malloc 00:17:01.487 10:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:01.745 true 00:17:01.746 10:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:01.746 [2024-07-15 10:24:26.529827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:01.746 [2024-07-15 10:24:26.529858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.746 [2024-07-15 10:24:26.529873] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2509e20 00:17:01.746 [2024-07-15 10:24:26.529882] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.746 [2024-07-15 10:24:26.530884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.746 [2024-07-15 10:24:26.530911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:01.746 BaseBdev2 00:17:02.004 10:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:02.004 10:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:02.004 BaseBdev3_malloc 00:17:02.004 10:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:02.262 true 00:17:02.262 10:24:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:02.262 [2024-07-15 10:24:27.030676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:02.262 [2024-07-15 10:24:27.030709] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.262 [2024-07-15 10:24:27.030726] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x250ad90 00:17:02.262 [2024-07-15 10:24:27.030735] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.262 [2024-07-15 10:24:27.031779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.262 [2024-07-15 10:24:27.031804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:02.262 BaseBdev3 00:17:02.262 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:02.262 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:02.520 BaseBdev4_malloc 00:17:02.520 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:17:02.778 true 00:17:02.778 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:02.778 [2024-07-15 10:24:27.511391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:02.778 [2024-07-15 10:24:27.511422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.778 [2024-07-15 10:24:27.511437] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x250d000 00:17:02.778 [2024-07-15 10:24:27.511446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.778 [2024-07-15 10:24:27.512478] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.778 [2024-07-15 10:24:27.512498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:02.778 BaseBdev4 00:17:02.778 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:17:03.037 [2024-07-15 10:24:27.667821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.037 [2024-07-15 10:24:27.668692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.037 [2024-07-15 10:24:27.668736] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:03.037 [2024-07-15 10:24:27.668771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.037 [2024-07-15 10:24:27.668922] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x250ddd0 00:17:03.037 [2024-07-15 10:24:27.668929] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:03.037 [2024-07-15 10:24:27.669051] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x250f080 00:17:03.037 [2024-07-15 10:24:27.669145] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x250ddd0 00:17:03.037 [2024-07-15 10:24:27.669151] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x250ddd0 00:17:03.037 [2024-07-15 10:24:27.669213] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.037 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.295 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.295 "name": "raid_bdev1", 00:17:03.295 "uuid": "187fbe50-0f62-4f6e-9a70-dbbd123eaf13", 00:17:03.295 "strip_size_kb": 64, 00:17:03.295 "state": "online", 00:17:03.295 "raid_level": "concat", 00:17:03.295 "superblock": true, 00:17:03.295 "num_base_bdevs": 4, 00:17:03.295 "num_base_bdevs_discovered": 4, 00:17:03.295 "num_base_bdevs_operational": 4, 00:17:03.295 "base_bdevs_list": [ 00:17:03.295 { 00:17:03.295 "name": "BaseBdev1", 00:17:03.295 "uuid": "405e5592-926f-5536-b360-6de688d54854", 00:17:03.295 "is_configured": true, 00:17:03.295 "data_offset": 2048, 00:17:03.295 "data_size": 63488 00:17:03.295 }, 00:17:03.295 { 00:17:03.295 "name": "BaseBdev2", 00:17:03.295 "uuid": "69892e78-5559-5c1d-90db-266502fac8e2", 00:17:03.295 "is_configured": true, 00:17:03.295 "data_offset": 2048, 00:17:03.295 "data_size": 63488 00:17:03.295 }, 00:17:03.295 { 00:17:03.295 "name": "BaseBdev3", 00:17:03.295 "uuid": "8572cb01-52c1-560a-8935-09cb4c5cc48b", 00:17:03.295 "is_configured": true, 00:17:03.295 "data_offset": 2048, 00:17:03.295 "data_size": 63488 00:17:03.295 }, 00:17:03.295 { 00:17:03.295 "name": "BaseBdev4", 00:17:03.295 "uuid": "a26afe82-8c2a-5ca1-9d10-7471568dce69", 00:17:03.295 "is_configured": true, 00:17:03.295 "data_offset": 2048, 00:17:03.295 "data_size": 63488 00:17:03.295 } 00:17:03.295 ] 00:17:03.295 }' 00:17:03.295 10:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.295 10:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.552 10:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:03.552 10:24:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:03.810 [2024-07-15 10:24:28.385911] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2512c70 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.744 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.003 10:24:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.003 "name": "raid_bdev1", 00:17:05.003 "uuid": "187fbe50-0f62-4f6e-9a70-dbbd123eaf13", 00:17:05.003 "strip_size_kb": 64, 00:17:05.003 "state": "online", 00:17:05.003 "raid_level": "concat", 00:17:05.003 "superblock": true, 00:17:05.003 "num_base_bdevs": 4, 00:17:05.003 "num_base_bdevs_discovered": 4, 00:17:05.003 "num_base_bdevs_operational": 4, 00:17:05.003 "base_bdevs_list": [ 00:17:05.003 { 00:17:05.003 "name": "BaseBdev1", 00:17:05.003 "uuid": "405e5592-926f-5536-b360-6de688d54854", 00:17:05.003 "is_configured": true, 00:17:05.003 "data_offset": 2048, 00:17:05.003 "data_size": 63488 00:17:05.003 }, 00:17:05.003 { 00:17:05.003 "name": "BaseBdev2", 00:17:05.003 "uuid": "69892e78-5559-5c1d-90db-266502fac8e2", 00:17:05.003 "is_configured": true, 00:17:05.003 "data_offset": 2048, 00:17:05.003 "data_size": 63488 00:17:05.003 }, 00:17:05.003 { 00:17:05.003 "name": "BaseBdev3", 00:17:05.004 "uuid": "8572cb01-52c1-560a-8935-09cb4c5cc48b", 00:17:05.004 "is_configured": true, 00:17:05.004 "data_offset": 2048, 00:17:05.004 "data_size": 63488 00:17:05.004 }, 00:17:05.004 { 00:17:05.004 "name": "BaseBdev4", 00:17:05.004 "uuid": "a26afe82-8c2a-5ca1-9d10-7471568dce69", 00:17:05.004 "is_configured": true, 00:17:05.004 "data_offset": 2048, 00:17:05.004 "data_size": 63488 00:17:05.004 } 00:17:05.004 ] 00:17:05.004 }' 00:17:05.004 10:24:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.004 10:24:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:05.716 [2024-07-15 10:24:30.322210] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.716 [2024-07-15 10:24:30.322246] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.716 [2024-07-15 10:24:30.324284] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.716 [2024-07-15 10:24:30.324309] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.716 [2024-07-15 10:24:30.324333] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.716 [2024-07-15 10:24:30.324340] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x250ddd0 name raid_bdev1, state offline 00:17:05.716 0 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1831796 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1831796 ']' 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1831796 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831796 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831796' 00:17:05.716 killing process with pid 1831796 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1831796 00:17:05.716 [2024-07-15 10:24:30.395291] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.716 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1831796 00:17:05.716 [2024-07-15 10:24:30.419959] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.uV16EyMB2X 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:17:05.974 00:17:05.974 real 0m5.865s 00:17:05.974 user 0m9.002s 
00:17:05.974 sys 0m1.062s 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.974 10:24:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.974 ************************************ 00:17:05.974 END TEST raid_write_error_test 00:17:05.974 ************************************ 00:17:05.974 10:24:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:05.974 10:24:30 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:05.975 10:24:30 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:05.975 10:24:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:05.975 10:24:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.975 10:24:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.975 ************************************ 00:17:05.975 START TEST raid_state_function_test 00:17:05.975 ************************************ 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@226 -- # local strip_size 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1832957 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1832957' 00:17:05.975 Process raid pid: 1832957 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1832957 /var/tmp/spdk-raid.sock 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 1832957 ']' 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:05.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.975 10:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.975 [2024-07-15 10:24:30.739299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:05.975 [2024-07-15 10:24:30.739342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.233 qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device cannot be used -- this message pair repeats for every QAT function 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7 00:17:06.234
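The raid_state_function_test below exercises how a raid1 bdev moves between the configuring and online states as its base bdevs appear. As a reading aid, the rpc.py calls it issues against the bdev_svc app on /var/tmp/spdk-raid.sock condense to roughly the following sketch (the harness additionally deletes and re-creates Existed_Raid between steps; the trailing jq fields are just one way to pull out the two values the test checks):

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Creating the raid before any of its base bdevs exists leaves it in state "configuring".
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# Add malloc base bdevs one at a time; each one raises num_base_bdevs_discovered,
# and only after the fourth does the raid report state "online".
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'
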
[2024-07-15 10:24:30.830717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.234 [2024-07-15 10:24:30.903625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.234 [2024-07-15 10:24:30.954235] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.234 [2024-07-15 10:24:30.954262] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.799 10:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.799 10:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:06.799 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:07.057 [2024-07-15 10:24:31.684782] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.057 [2024-07-15 10:24:31.684814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.057 [2024-07-15 10:24:31.684821] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.057 [2024-07-15 10:24:31.684828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.057 [2024-07-15 10:24:31.684833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:07.057 [2024-07-15 10:24:31.684841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.057 [2024-07-15 10:24:31.684846] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:07.057 [2024-07-15 10:24:31.684853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.057 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.315 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.315 "name": "Existed_Raid", 00:17:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.315 "strip_size_kb": 0, 00:17:07.315 "state": "configuring", 00:17:07.315 "raid_level": "raid1", 00:17:07.315 "superblock": false, 00:17:07.315 "num_base_bdevs": 4, 00:17:07.315 "num_base_bdevs_discovered": 0, 00:17:07.315 "num_base_bdevs_operational": 4, 00:17:07.315 "base_bdevs_list": [ 00:17:07.315 { 00:17:07.315 "name": "BaseBdev1", 00:17:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.315 "is_configured": false, 00:17:07.315 "data_offset": 0, 00:17:07.315 "data_size": 0 00:17:07.315 }, 00:17:07.315 { 00:17:07.315 "name": "BaseBdev2", 00:17:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.315 "is_configured": false, 00:17:07.315 "data_offset": 0, 00:17:07.315 "data_size": 0 00:17:07.315 }, 00:17:07.315 { 00:17:07.315 "name": "BaseBdev3", 00:17:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.315 "is_configured": false, 00:17:07.315 "data_offset": 0, 00:17:07.315 "data_size": 0 00:17:07.315 }, 00:17:07.315 { 00:17:07.315 "name": "BaseBdev4", 00:17:07.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.315 "is_configured": false, 00:17:07.315 "data_offset": 0, 00:17:07.315 "data_size": 0 00:17:07.315 } 00:17:07.315 ] 00:17:07.315 }' 00:17:07.315 10:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.315 10:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.573 10:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:07.831 [2024-07-15 10:24:32.470728] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.831 [2024-07-15 10:24:32.470749] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11dcf60 name Existed_Raid, state configuring 00:17:07.831 10:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:08.089 [2024-07-15 
10:24:32.639167] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:08.089 [2024-07-15 10:24:32.639187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:08.089 [2024-07-15 10:24:32.639193] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.089 [2024-07-15 10:24:32.639201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.089 [2024-07-15 10:24:32.639206] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:08.089 [2024-07-15 10:24:32.639213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.089 [2024-07-15 10:24:32.639219] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:08.089 [2024-07-15 10:24:32.639226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:08.089 10:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:08.089 [2024-07-15 10:24:32.820029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.089 BaseBdev1 00:17:08.089 10:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:08.089 10:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:08.089 10:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:08.089 10:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:08.089 10:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:08.089 10:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:08.089 10:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.346 10:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:08.604 [ 00:17:08.604 { 00:17:08.604 "name": "BaseBdev1", 00:17:08.604 "aliases": [ 00:17:08.604 "71eeb48f-718d-447e-a12b-5cded993f2ff" 00:17:08.604 ], 00:17:08.604 "product_name": "Malloc disk", 00:17:08.604 "block_size": 512, 00:17:08.604 "num_blocks": 65536, 00:17:08.604 "uuid": "71eeb48f-718d-447e-a12b-5cded993f2ff", 00:17:08.604 "assigned_rate_limits": { 00:17:08.604 "rw_ios_per_sec": 0, 00:17:08.604 "rw_mbytes_per_sec": 0, 00:17:08.604 "r_mbytes_per_sec": 0, 00:17:08.604 "w_mbytes_per_sec": 0 00:17:08.604 }, 00:17:08.604 "claimed": true, 00:17:08.604 "claim_type": "exclusive_write", 00:17:08.604 "zoned": false, 00:17:08.604 "supported_io_types": { 00:17:08.604 "read": true, 00:17:08.604 "write": true, 00:17:08.604 "unmap": true, 00:17:08.604 "flush": true, 00:17:08.604 "reset": true, 00:17:08.604 "nvme_admin": false, 00:17:08.604 "nvme_io": false, 00:17:08.604 "nvme_io_md": false, 00:17:08.604 "write_zeroes": true, 00:17:08.604 "zcopy": true, 00:17:08.604 "get_zone_info": false, 00:17:08.604 "zone_management": false, 00:17:08.604 
"zone_append": false, 00:17:08.604 "compare": false, 00:17:08.604 "compare_and_write": false, 00:17:08.604 "abort": true, 00:17:08.604 "seek_hole": false, 00:17:08.604 "seek_data": false, 00:17:08.604 "copy": true, 00:17:08.604 "nvme_iov_md": false 00:17:08.604 }, 00:17:08.604 "memory_domains": [ 00:17:08.604 { 00:17:08.604 "dma_device_id": "system", 00:17:08.604 "dma_device_type": 1 00:17:08.604 }, 00:17:08.604 { 00:17:08.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.604 "dma_device_type": 2 00:17:08.604 } 00:17:08.604 ], 00:17:08.604 "driver_specific": {} 00:17:08.604 } 00:17:08.604 ] 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.604 "name": "Existed_Raid", 00:17:08.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.604 "strip_size_kb": 0, 00:17:08.604 "state": "configuring", 00:17:08.604 "raid_level": "raid1", 00:17:08.604 "superblock": false, 00:17:08.604 "num_base_bdevs": 4, 00:17:08.604 "num_base_bdevs_discovered": 1, 00:17:08.604 "num_base_bdevs_operational": 4, 00:17:08.604 "base_bdevs_list": [ 00:17:08.604 { 00:17:08.604 "name": "BaseBdev1", 00:17:08.604 "uuid": "71eeb48f-718d-447e-a12b-5cded993f2ff", 00:17:08.604 "is_configured": true, 00:17:08.604 "data_offset": 0, 00:17:08.604 "data_size": 65536 00:17:08.604 }, 00:17:08.604 { 00:17:08.604 "name": "BaseBdev2", 00:17:08.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.604 "is_configured": false, 00:17:08.604 "data_offset": 0, 00:17:08.604 "data_size": 0 00:17:08.604 }, 00:17:08.604 { 00:17:08.604 "name": "BaseBdev3", 00:17:08.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.604 "is_configured": false, 00:17:08.604 "data_offset": 0, 00:17:08.604 "data_size": 0 00:17:08.604 }, 00:17:08.604 { 00:17:08.604 "name": "BaseBdev4", 00:17:08.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.604 "is_configured": false, 00:17:08.604 "data_offset": 0, 
00:17:08.604 "data_size": 0 00:17:08.604 } 00:17:08.604 ] 00:17:08.604 }' 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.604 10:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.170 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:09.170 [2024-07-15 10:24:33.930871] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:09.170 [2024-07-15 10:24:33.930900] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11dc7d0 name Existed_Raid, state configuring 00:17:09.170 10:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:09.429 [2024-07-15 10:24:34.099328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.429 [2024-07-15 10:24:34.100355] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.429 [2024-07-15 10:24:34.100381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.429 [2024-07-15 10:24:34.100387] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.429 [2024-07-15 10:24:34.100394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.429 [2024-07-15 10:24:34.100400] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:09.429 [2024-07-15 10:24:34.100406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.429 10:24:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.688 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.688 "name": "Existed_Raid", 00:17:09.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.688 "strip_size_kb": 0, 00:17:09.688 "state": "configuring", 00:17:09.688 "raid_level": "raid1", 00:17:09.688 "superblock": false, 00:17:09.688 "num_base_bdevs": 4, 00:17:09.688 "num_base_bdevs_discovered": 1, 00:17:09.688 "num_base_bdevs_operational": 4, 00:17:09.688 "base_bdevs_list": [ 00:17:09.688 { 00:17:09.688 "name": "BaseBdev1", 00:17:09.688 "uuid": "71eeb48f-718d-447e-a12b-5cded993f2ff", 00:17:09.688 "is_configured": true, 00:17:09.688 "data_offset": 0, 00:17:09.688 "data_size": 65536 00:17:09.688 }, 00:17:09.688 { 00:17:09.688 "name": "BaseBdev2", 00:17:09.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.688 "is_configured": false, 00:17:09.688 "data_offset": 0, 00:17:09.688 "data_size": 0 00:17:09.688 }, 00:17:09.688 { 00:17:09.688 "name": "BaseBdev3", 00:17:09.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.688 "is_configured": false, 00:17:09.688 "data_offset": 0, 00:17:09.688 "data_size": 0 00:17:09.688 }, 00:17:09.688 { 00:17:09.688 "name": "BaseBdev4", 00:17:09.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.688 "is_configured": false, 00:17:09.688 "data_offset": 0, 00:17:09.688 "data_size": 0 00:17:09.688 } 00:17:09.688 ] 00:17:09.688 }' 00:17:09.688 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.688 10:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.254 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.254 [2024-07-15 10:24:34.900115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.254 BaseBdev2 00:17:10.254 10:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:10.254 10:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:10.254 10:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:10.254 10:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:10.254 10:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:10.254 10:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:10.254 10:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.513 [ 00:17:10.513 { 00:17:10.513 "name": "BaseBdev2", 00:17:10.513 "aliases": [ 00:17:10.513 "7a912475-8eda-4ce5-981d-442e7a2bad53" 00:17:10.513 ], 00:17:10.513 "product_name": "Malloc disk", 00:17:10.513 "block_size": 512, 00:17:10.513 "num_blocks": 65536, 00:17:10.513 "uuid": "7a912475-8eda-4ce5-981d-442e7a2bad53", 00:17:10.513 "assigned_rate_limits": { 00:17:10.513 
"rw_ios_per_sec": 0, 00:17:10.513 "rw_mbytes_per_sec": 0, 00:17:10.513 "r_mbytes_per_sec": 0, 00:17:10.513 "w_mbytes_per_sec": 0 00:17:10.513 }, 00:17:10.513 "claimed": true, 00:17:10.513 "claim_type": "exclusive_write", 00:17:10.513 "zoned": false, 00:17:10.513 "supported_io_types": { 00:17:10.513 "read": true, 00:17:10.513 "write": true, 00:17:10.513 "unmap": true, 00:17:10.513 "flush": true, 00:17:10.513 "reset": true, 00:17:10.513 "nvme_admin": false, 00:17:10.513 "nvme_io": false, 00:17:10.513 "nvme_io_md": false, 00:17:10.513 "write_zeroes": true, 00:17:10.513 "zcopy": true, 00:17:10.513 "get_zone_info": false, 00:17:10.513 "zone_management": false, 00:17:10.513 "zone_append": false, 00:17:10.513 "compare": false, 00:17:10.513 "compare_and_write": false, 00:17:10.513 "abort": true, 00:17:10.513 "seek_hole": false, 00:17:10.513 "seek_data": false, 00:17:10.513 "copy": true, 00:17:10.513 "nvme_iov_md": false 00:17:10.513 }, 00:17:10.513 "memory_domains": [ 00:17:10.513 { 00:17:10.513 "dma_device_id": "system", 00:17:10.513 "dma_device_type": 1 00:17:10.513 }, 00:17:10.513 { 00:17:10.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.513 "dma_device_type": 2 00:17:10.513 } 00:17:10.513 ], 00:17:10.513 "driver_specific": {} 00:17:10.513 } 00:17:10.513 ] 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.513 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.771 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.771 "name": "Existed_Raid", 00:17:10.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.771 "strip_size_kb": 0, 00:17:10.771 "state": "configuring", 00:17:10.771 "raid_level": "raid1", 00:17:10.771 "superblock": false, 00:17:10.771 "num_base_bdevs": 4, 00:17:10.771 "num_base_bdevs_discovered": 2, 00:17:10.771 "num_base_bdevs_operational": 4, 
00:17:10.771 "base_bdevs_list": [ 00:17:10.771 { 00:17:10.771 "name": "BaseBdev1", 00:17:10.771 "uuid": "71eeb48f-718d-447e-a12b-5cded993f2ff", 00:17:10.771 "is_configured": true, 00:17:10.771 "data_offset": 0, 00:17:10.771 "data_size": 65536 00:17:10.771 }, 00:17:10.771 { 00:17:10.771 "name": "BaseBdev2", 00:17:10.771 "uuid": "7a912475-8eda-4ce5-981d-442e7a2bad53", 00:17:10.771 "is_configured": true, 00:17:10.771 "data_offset": 0, 00:17:10.771 "data_size": 65536 00:17:10.771 }, 00:17:10.771 { 00:17:10.771 "name": "BaseBdev3", 00:17:10.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.771 "is_configured": false, 00:17:10.771 "data_offset": 0, 00:17:10.771 "data_size": 0 00:17:10.771 }, 00:17:10.771 { 00:17:10.771 "name": "BaseBdev4", 00:17:10.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.771 "is_configured": false, 00:17:10.771 "data_offset": 0, 00:17:10.771 "data_size": 0 00:17:10.771 } 00:17:10.771 ] 00:17:10.771 }' 00:17:10.771 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.771 10:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.338 10:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:11.338 [2024-07-15 10:24:36.033706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:11.338 BaseBdev3 00:17:11.338 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:11.338 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:11.338 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:11.338 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:11.338 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:11.338 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:11.338 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.597 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:11.597 [ 00:17:11.597 { 00:17:11.597 "name": "BaseBdev3", 00:17:11.597 "aliases": [ 00:17:11.597 "90774936-88e4-41b3-88f6-df8bbe91835b" 00:17:11.597 ], 00:17:11.597 "product_name": "Malloc disk", 00:17:11.597 "block_size": 512, 00:17:11.597 "num_blocks": 65536, 00:17:11.597 "uuid": "90774936-88e4-41b3-88f6-df8bbe91835b", 00:17:11.597 "assigned_rate_limits": { 00:17:11.597 "rw_ios_per_sec": 0, 00:17:11.597 "rw_mbytes_per_sec": 0, 00:17:11.597 "r_mbytes_per_sec": 0, 00:17:11.597 "w_mbytes_per_sec": 0 00:17:11.597 }, 00:17:11.597 "claimed": true, 00:17:11.597 "claim_type": "exclusive_write", 00:17:11.597 "zoned": false, 00:17:11.597 "supported_io_types": { 00:17:11.597 "read": true, 00:17:11.597 "write": true, 00:17:11.597 "unmap": true, 00:17:11.597 "flush": true, 00:17:11.597 "reset": true, 00:17:11.597 "nvme_admin": false, 00:17:11.597 "nvme_io": false, 00:17:11.597 "nvme_io_md": false, 00:17:11.597 
"write_zeroes": true, 00:17:11.597 "zcopy": true, 00:17:11.597 "get_zone_info": false, 00:17:11.597 "zone_management": false, 00:17:11.597 "zone_append": false, 00:17:11.597 "compare": false, 00:17:11.597 "compare_and_write": false, 00:17:11.597 "abort": true, 00:17:11.597 "seek_hole": false, 00:17:11.597 "seek_data": false, 00:17:11.597 "copy": true, 00:17:11.597 "nvme_iov_md": false 00:17:11.597 }, 00:17:11.597 "memory_domains": [ 00:17:11.597 { 00:17:11.597 "dma_device_id": "system", 00:17:11.597 "dma_device_type": 1 00:17:11.597 }, 00:17:11.597 { 00:17:11.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.597 "dma_device_type": 2 00:17:11.597 } 00:17:11.597 ], 00:17:11.597 "driver_specific": {} 00:17:11.597 } 00:17:11.597 ] 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.857 "name": "Existed_Raid", 00:17:11.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.857 "strip_size_kb": 0, 00:17:11.857 "state": "configuring", 00:17:11.857 "raid_level": "raid1", 00:17:11.857 "superblock": false, 00:17:11.857 "num_base_bdevs": 4, 00:17:11.857 "num_base_bdevs_discovered": 3, 00:17:11.857 "num_base_bdevs_operational": 4, 00:17:11.857 "base_bdevs_list": [ 00:17:11.857 { 00:17:11.857 "name": "BaseBdev1", 00:17:11.857 "uuid": "71eeb48f-718d-447e-a12b-5cded993f2ff", 00:17:11.857 "is_configured": true, 00:17:11.857 "data_offset": 0, 00:17:11.857 "data_size": 65536 00:17:11.857 }, 00:17:11.857 { 00:17:11.857 "name": "BaseBdev2", 00:17:11.857 "uuid": "7a912475-8eda-4ce5-981d-442e7a2bad53", 00:17:11.857 "is_configured": true, 00:17:11.857 "data_offset": 0, 00:17:11.857 "data_size": 65536 00:17:11.857 }, 00:17:11.857 { 00:17:11.857 "name": "BaseBdev3", 
00:17:11.857 "uuid": "90774936-88e4-41b3-88f6-df8bbe91835b", 00:17:11.857 "is_configured": true, 00:17:11.857 "data_offset": 0, 00:17:11.857 "data_size": 65536 00:17:11.857 }, 00:17:11.857 { 00:17:11.857 "name": "BaseBdev4", 00:17:11.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.857 "is_configured": false, 00:17:11.857 "data_offset": 0, 00:17:11.857 "data_size": 0 00:17:11.857 } 00:17:11.857 ] 00:17:11.857 }' 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.857 10:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.435 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:12.435 [2024-07-15 10:24:37.183348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:12.435 [2024-07-15 10:24:37.183375] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11dd830 00:17:12.435 [2024-07-15 10:24:37.183380] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:12.435 [2024-07-15 10:24:37.183508] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11d6280 00:17:12.435 [2024-07-15 10:24:37.183594] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11dd830 00:17:12.435 [2024-07-15 10:24:37.183600] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11dd830 00:17:12.435 [2024-07-15 10:24:37.183712] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.435 BaseBdev4 00:17:12.435 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:12.435 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:12.435 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:12.435 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:12.435 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:12.435 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:12.435 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.692 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:12.950 [ 00:17:12.950 { 00:17:12.950 "name": "BaseBdev4", 00:17:12.950 "aliases": [ 00:17:12.950 "6c3c5ad1-8ff8-484f-9324-c8ebdb7568e9" 00:17:12.950 ], 00:17:12.950 "product_name": "Malloc disk", 00:17:12.950 "block_size": 512, 00:17:12.950 "num_blocks": 65536, 00:17:12.950 "uuid": "6c3c5ad1-8ff8-484f-9324-c8ebdb7568e9", 00:17:12.950 "assigned_rate_limits": { 00:17:12.950 "rw_ios_per_sec": 0, 00:17:12.950 "rw_mbytes_per_sec": 0, 00:17:12.950 "r_mbytes_per_sec": 0, 00:17:12.950 "w_mbytes_per_sec": 0 00:17:12.950 }, 00:17:12.950 "claimed": true, 00:17:12.950 "claim_type": "exclusive_write", 00:17:12.950 "zoned": false, 00:17:12.950 "supported_io_types": { 00:17:12.950 "read": true, 
00:17:12.951 "write": true, 00:17:12.951 "unmap": true, 00:17:12.951 "flush": true, 00:17:12.951 "reset": true, 00:17:12.951 "nvme_admin": false, 00:17:12.951 "nvme_io": false, 00:17:12.951 "nvme_io_md": false, 00:17:12.951 "write_zeroes": true, 00:17:12.951 "zcopy": true, 00:17:12.951 "get_zone_info": false, 00:17:12.951 "zone_management": false, 00:17:12.951 "zone_append": false, 00:17:12.951 "compare": false, 00:17:12.951 "compare_and_write": false, 00:17:12.951 "abort": true, 00:17:12.951 "seek_hole": false, 00:17:12.951 "seek_data": false, 00:17:12.951 "copy": true, 00:17:12.951 "nvme_iov_md": false 00:17:12.951 }, 00:17:12.951 "memory_domains": [ 00:17:12.951 { 00:17:12.951 "dma_device_id": "system", 00:17:12.951 "dma_device_type": 1 00:17:12.951 }, 00:17:12.951 { 00:17:12.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.951 "dma_device_type": 2 00:17:12.951 } 00:17:12.951 ], 00:17:12.951 "driver_specific": {} 00:17:12.951 } 00:17:12.951 ] 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.951 "name": "Existed_Raid", 00:17:12.951 "uuid": "bfc0c4ab-116a-4e51-8c66-efe0c34256e5", 00:17:12.951 "strip_size_kb": 0, 00:17:12.951 "state": "online", 00:17:12.951 "raid_level": "raid1", 00:17:12.951 "superblock": false, 00:17:12.951 "num_base_bdevs": 4, 00:17:12.951 "num_base_bdevs_discovered": 4, 00:17:12.951 "num_base_bdevs_operational": 4, 00:17:12.951 "base_bdevs_list": [ 00:17:12.951 { 00:17:12.951 "name": "BaseBdev1", 00:17:12.951 "uuid": "71eeb48f-718d-447e-a12b-5cded993f2ff", 00:17:12.951 "is_configured": true, 00:17:12.951 "data_offset": 0, 00:17:12.951 "data_size": 65536 00:17:12.951 }, 00:17:12.951 { 00:17:12.951 "name": "BaseBdev2", 00:17:12.951 "uuid": 
"7a912475-8eda-4ce5-981d-442e7a2bad53", 00:17:12.951 "is_configured": true, 00:17:12.951 "data_offset": 0, 00:17:12.951 "data_size": 65536 00:17:12.951 }, 00:17:12.951 { 00:17:12.951 "name": "BaseBdev3", 00:17:12.951 "uuid": "90774936-88e4-41b3-88f6-df8bbe91835b", 00:17:12.951 "is_configured": true, 00:17:12.951 "data_offset": 0, 00:17:12.951 "data_size": 65536 00:17:12.951 }, 00:17:12.951 { 00:17:12.951 "name": "BaseBdev4", 00:17:12.951 "uuid": "6c3c5ad1-8ff8-484f-9324-c8ebdb7568e9", 00:17:12.951 "is_configured": true, 00:17:12.951 "data_offset": 0, 00:17:12.951 "data_size": 65536 00:17:12.951 } 00:17:12.951 ] 00:17:12.951 }' 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.951 10:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.517 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:13.517 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:13.517 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:13.517 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:13.517 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:13.517 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:13.517 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:13.517 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:13.776 [2024-07-15 10:24:38.330525] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.776 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:13.776 "name": "Existed_Raid", 00:17:13.776 "aliases": [ 00:17:13.776 "bfc0c4ab-116a-4e51-8c66-efe0c34256e5" 00:17:13.776 ], 00:17:13.776 "product_name": "Raid Volume", 00:17:13.776 "block_size": 512, 00:17:13.776 "num_blocks": 65536, 00:17:13.776 "uuid": "bfc0c4ab-116a-4e51-8c66-efe0c34256e5", 00:17:13.776 "assigned_rate_limits": { 00:17:13.776 "rw_ios_per_sec": 0, 00:17:13.776 "rw_mbytes_per_sec": 0, 00:17:13.776 "r_mbytes_per_sec": 0, 00:17:13.776 "w_mbytes_per_sec": 0 00:17:13.776 }, 00:17:13.776 "claimed": false, 00:17:13.776 "zoned": false, 00:17:13.776 "supported_io_types": { 00:17:13.776 "read": true, 00:17:13.776 "write": true, 00:17:13.776 "unmap": false, 00:17:13.776 "flush": false, 00:17:13.776 "reset": true, 00:17:13.776 "nvme_admin": false, 00:17:13.776 "nvme_io": false, 00:17:13.776 "nvme_io_md": false, 00:17:13.776 "write_zeroes": true, 00:17:13.776 "zcopy": false, 00:17:13.776 "get_zone_info": false, 00:17:13.776 "zone_management": false, 00:17:13.776 "zone_append": false, 00:17:13.776 "compare": false, 00:17:13.776 "compare_and_write": false, 00:17:13.776 "abort": false, 00:17:13.776 "seek_hole": false, 00:17:13.776 "seek_data": false, 00:17:13.776 "copy": false, 00:17:13.776 "nvme_iov_md": false 00:17:13.776 }, 00:17:13.776 "memory_domains": [ 00:17:13.776 { 00:17:13.776 "dma_device_id": "system", 00:17:13.776 "dma_device_type": 1 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.776 "dma_device_type": 2 00:17:13.776 }, 
00:17:13.776 { 00:17:13.776 "dma_device_id": "system", 00:17:13.776 "dma_device_type": 1 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.776 "dma_device_type": 2 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "dma_device_id": "system", 00:17:13.776 "dma_device_type": 1 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.776 "dma_device_type": 2 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "dma_device_id": "system", 00:17:13.776 "dma_device_type": 1 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.776 "dma_device_type": 2 00:17:13.776 } 00:17:13.776 ], 00:17:13.776 "driver_specific": { 00:17:13.776 "raid": { 00:17:13.776 "uuid": "bfc0c4ab-116a-4e51-8c66-efe0c34256e5", 00:17:13.776 "strip_size_kb": 0, 00:17:13.776 "state": "online", 00:17:13.776 "raid_level": "raid1", 00:17:13.776 "superblock": false, 00:17:13.776 "num_base_bdevs": 4, 00:17:13.776 "num_base_bdevs_discovered": 4, 00:17:13.776 "num_base_bdevs_operational": 4, 00:17:13.776 "base_bdevs_list": [ 00:17:13.776 { 00:17:13.776 "name": "BaseBdev1", 00:17:13.776 "uuid": "71eeb48f-718d-447e-a12b-5cded993f2ff", 00:17:13.776 "is_configured": true, 00:17:13.776 "data_offset": 0, 00:17:13.776 "data_size": 65536 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "name": "BaseBdev2", 00:17:13.776 "uuid": "7a912475-8eda-4ce5-981d-442e7a2bad53", 00:17:13.776 "is_configured": true, 00:17:13.776 "data_offset": 0, 00:17:13.776 "data_size": 65536 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "name": "BaseBdev3", 00:17:13.776 "uuid": "90774936-88e4-41b3-88f6-df8bbe91835b", 00:17:13.776 "is_configured": true, 00:17:13.776 "data_offset": 0, 00:17:13.776 "data_size": 65536 00:17:13.776 }, 00:17:13.776 { 00:17:13.776 "name": "BaseBdev4", 00:17:13.776 "uuid": "6c3c5ad1-8ff8-484f-9324-c8ebdb7568e9", 00:17:13.776 "is_configured": true, 00:17:13.776 "data_offset": 0, 00:17:13.776 "data_size": 65536 00:17:13.776 } 00:17:13.776 ] 00:17:13.776 } 00:17:13.776 } 00:17:13.776 }' 00:17:13.776 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:13.776 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:13.776 BaseBdev2 00:17:13.776 BaseBdev3 00:17:13.776 BaseBdev4' 00:17:13.776 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.776 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:13.776 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.776 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.776 "name": "BaseBdev1", 00:17:13.776 "aliases": [ 00:17:13.776 "71eeb48f-718d-447e-a12b-5cded993f2ff" 00:17:13.776 ], 00:17:13.776 "product_name": "Malloc disk", 00:17:13.776 "block_size": 512, 00:17:13.776 "num_blocks": 65536, 00:17:13.776 "uuid": "71eeb48f-718d-447e-a12b-5cded993f2ff", 00:17:13.776 "assigned_rate_limits": { 00:17:13.776 "rw_ios_per_sec": 0, 00:17:13.776 "rw_mbytes_per_sec": 0, 00:17:13.776 "r_mbytes_per_sec": 0, 00:17:13.776 "w_mbytes_per_sec": 0 00:17:13.776 }, 00:17:13.776 "claimed": true, 00:17:13.777 "claim_type": "exclusive_write", 00:17:13.777 "zoned": false, 00:17:13.777 
"supported_io_types": { 00:17:13.777 "read": true, 00:17:13.777 "write": true, 00:17:13.777 "unmap": true, 00:17:13.777 "flush": true, 00:17:13.777 "reset": true, 00:17:13.777 "nvme_admin": false, 00:17:13.777 "nvme_io": false, 00:17:13.777 "nvme_io_md": false, 00:17:13.777 "write_zeroes": true, 00:17:13.777 "zcopy": true, 00:17:13.777 "get_zone_info": false, 00:17:13.777 "zone_management": false, 00:17:13.777 "zone_append": false, 00:17:13.777 "compare": false, 00:17:13.777 "compare_and_write": false, 00:17:13.777 "abort": true, 00:17:13.777 "seek_hole": false, 00:17:13.777 "seek_data": false, 00:17:13.777 "copy": true, 00:17:13.777 "nvme_iov_md": false 00:17:13.777 }, 00:17:13.777 "memory_domains": [ 00:17:13.777 { 00:17:13.777 "dma_device_id": "system", 00:17:13.777 "dma_device_type": 1 00:17:13.777 }, 00:17:13.777 { 00:17:13.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.777 "dma_device_type": 2 00:17:13.777 } 00:17:13.777 ], 00:17:13.777 "driver_specific": {} 00:17:13.777 }' 00:17:13.777 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.035 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.294 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.294 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.294 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.294 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:14.294 10:24:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.294 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.294 "name": "BaseBdev2", 00:17:14.294 "aliases": [ 00:17:14.294 "7a912475-8eda-4ce5-981d-442e7a2bad53" 00:17:14.294 ], 00:17:14.294 "product_name": "Malloc disk", 00:17:14.294 "block_size": 512, 00:17:14.294 "num_blocks": 65536, 00:17:14.294 "uuid": "7a912475-8eda-4ce5-981d-442e7a2bad53", 00:17:14.294 "assigned_rate_limits": { 00:17:14.294 "rw_ios_per_sec": 0, 00:17:14.294 "rw_mbytes_per_sec": 0, 00:17:14.294 "r_mbytes_per_sec": 0, 00:17:14.294 "w_mbytes_per_sec": 0 00:17:14.294 }, 00:17:14.294 "claimed": true, 00:17:14.294 "claim_type": "exclusive_write", 00:17:14.294 "zoned": false, 00:17:14.294 "supported_io_types": { 00:17:14.294 "read": true, 00:17:14.294 "write": true, 00:17:14.294 "unmap": true, 00:17:14.294 "flush": true, 00:17:14.294 "reset": true, 00:17:14.294 
"nvme_admin": false, 00:17:14.294 "nvme_io": false, 00:17:14.294 "nvme_io_md": false, 00:17:14.294 "write_zeroes": true, 00:17:14.294 "zcopy": true, 00:17:14.294 "get_zone_info": false, 00:17:14.294 "zone_management": false, 00:17:14.294 "zone_append": false, 00:17:14.294 "compare": false, 00:17:14.294 "compare_and_write": false, 00:17:14.294 "abort": true, 00:17:14.294 "seek_hole": false, 00:17:14.294 "seek_data": false, 00:17:14.294 "copy": true, 00:17:14.294 "nvme_iov_md": false 00:17:14.294 }, 00:17:14.294 "memory_domains": [ 00:17:14.294 { 00:17:14.294 "dma_device_id": "system", 00:17:14.294 "dma_device_type": 1 00:17:14.294 }, 00:17:14.294 { 00:17:14.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.294 "dma_device_type": 2 00:17:14.294 } 00:17:14.294 ], 00:17:14.294 "driver_specific": {} 00:17:14.294 }' 00:17:14.294 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.553 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.811 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.811 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.811 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:14.811 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.811 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.811 "name": "BaseBdev3", 00:17:14.811 "aliases": [ 00:17:14.811 "90774936-88e4-41b3-88f6-df8bbe91835b" 00:17:14.811 ], 00:17:14.811 "product_name": "Malloc disk", 00:17:14.811 "block_size": 512, 00:17:14.811 "num_blocks": 65536, 00:17:14.811 "uuid": "90774936-88e4-41b3-88f6-df8bbe91835b", 00:17:14.811 "assigned_rate_limits": { 00:17:14.811 "rw_ios_per_sec": 0, 00:17:14.811 "rw_mbytes_per_sec": 0, 00:17:14.811 "r_mbytes_per_sec": 0, 00:17:14.811 "w_mbytes_per_sec": 0 00:17:14.811 }, 00:17:14.811 "claimed": true, 00:17:14.811 "claim_type": "exclusive_write", 00:17:14.811 "zoned": false, 00:17:14.811 "supported_io_types": { 00:17:14.811 "read": true, 00:17:14.811 "write": true, 00:17:14.811 "unmap": true, 00:17:14.811 "flush": true, 00:17:14.811 "reset": true, 00:17:14.811 "nvme_admin": false, 00:17:14.811 "nvme_io": false, 00:17:14.811 "nvme_io_md": false, 00:17:14.811 "write_zeroes": true, 00:17:14.811 "zcopy": true, 00:17:14.811 "get_zone_info": 
false, 00:17:14.811 "zone_management": false, 00:17:14.811 "zone_append": false, 00:17:14.811 "compare": false, 00:17:14.811 "compare_and_write": false, 00:17:14.811 "abort": true, 00:17:14.811 "seek_hole": false, 00:17:14.811 "seek_data": false, 00:17:14.811 "copy": true, 00:17:14.811 "nvme_iov_md": false 00:17:14.811 }, 00:17:14.811 "memory_domains": [ 00:17:14.811 { 00:17:14.811 "dma_device_id": "system", 00:17:14.811 "dma_device_type": 1 00:17:14.811 }, 00:17:14.811 { 00:17:14.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.811 "dma_device_type": 2 00:17:14.811 } 00:17:14.811 ], 00:17:14.811 "driver_specific": {} 00:17:14.811 }' 00:17:14.811 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.811 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:15.069 10:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.328 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.328 "name": "BaseBdev4", 00:17:15.328 "aliases": [ 00:17:15.328 "6c3c5ad1-8ff8-484f-9324-c8ebdb7568e9" 00:17:15.328 ], 00:17:15.328 "product_name": "Malloc disk", 00:17:15.328 "block_size": 512, 00:17:15.328 "num_blocks": 65536, 00:17:15.328 "uuid": "6c3c5ad1-8ff8-484f-9324-c8ebdb7568e9", 00:17:15.328 "assigned_rate_limits": { 00:17:15.328 "rw_ios_per_sec": 0, 00:17:15.328 "rw_mbytes_per_sec": 0, 00:17:15.328 "r_mbytes_per_sec": 0, 00:17:15.328 "w_mbytes_per_sec": 0 00:17:15.328 }, 00:17:15.328 "claimed": true, 00:17:15.328 "claim_type": "exclusive_write", 00:17:15.328 "zoned": false, 00:17:15.328 "supported_io_types": { 00:17:15.328 "read": true, 00:17:15.328 "write": true, 00:17:15.328 "unmap": true, 00:17:15.328 "flush": true, 00:17:15.328 "reset": true, 00:17:15.328 "nvme_admin": false, 00:17:15.328 "nvme_io": false, 00:17:15.328 "nvme_io_md": false, 00:17:15.328 "write_zeroes": true, 00:17:15.328 "zcopy": true, 00:17:15.328 "get_zone_info": false, 00:17:15.328 "zone_management": false, 00:17:15.328 "zone_append": false, 00:17:15.328 "compare": false, 00:17:15.328 "compare_and_write": false, 00:17:15.328 "abort": true, 
00:17:15.328 "seek_hole": false, 00:17:15.328 "seek_data": false, 00:17:15.328 "copy": true, 00:17:15.328 "nvme_iov_md": false 00:17:15.328 }, 00:17:15.328 "memory_domains": [ 00:17:15.328 { 00:17:15.328 "dma_device_id": "system", 00:17:15.328 "dma_device_type": 1 00:17:15.328 }, 00:17:15.328 { 00:17:15.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.328 "dma_device_type": 2 00:17:15.328 } 00:17:15.328 ], 00:17:15.328 "driver_specific": {} 00:17:15.328 }' 00:17:15.328 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.328 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.328 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:15.328 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.586 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:15.845 [2024-07-15 10:24:40.471889] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.845 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.104 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.104 "name": "Existed_Raid", 00:17:16.104 "uuid": "bfc0c4ab-116a-4e51-8c66-efe0c34256e5", 00:17:16.104 "strip_size_kb": 0, 00:17:16.104 "state": "online", 00:17:16.104 "raid_level": "raid1", 00:17:16.104 "superblock": false, 00:17:16.104 "num_base_bdevs": 4, 00:17:16.104 "num_base_bdevs_discovered": 3, 00:17:16.104 "num_base_bdevs_operational": 3, 00:17:16.104 "base_bdevs_list": [ 00:17:16.104 { 00:17:16.104 "name": null, 00:17:16.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.104 "is_configured": false, 00:17:16.104 "data_offset": 0, 00:17:16.104 "data_size": 65536 00:17:16.104 }, 00:17:16.104 { 00:17:16.104 "name": "BaseBdev2", 00:17:16.104 "uuid": "7a912475-8eda-4ce5-981d-442e7a2bad53", 00:17:16.104 "is_configured": true, 00:17:16.104 "data_offset": 0, 00:17:16.104 "data_size": 65536 00:17:16.104 }, 00:17:16.104 { 00:17:16.104 "name": "BaseBdev3", 00:17:16.104 "uuid": "90774936-88e4-41b3-88f6-df8bbe91835b", 00:17:16.104 "is_configured": true, 00:17:16.104 "data_offset": 0, 00:17:16.104 "data_size": 65536 00:17:16.104 }, 00:17:16.104 { 00:17:16.104 "name": "BaseBdev4", 00:17:16.104 "uuid": "6c3c5ad1-8ff8-484f-9324-c8ebdb7568e9", 00:17:16.104 "is_configured": true, 00:17:16.104 "data_offset": 0, 00:17:16.104 "data_size": 65536 00:17:16.104 } 00:17:16.104 ] 00:17:16.104 }' 00:17:16.104 10:24:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.104 10:24:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.363 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:16.363 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:16.363 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.363 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:16.621 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:16.621 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.621 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:16.880 [2024-07-15 10:24:41.479448] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.880 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:16.880 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:16.880 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:16.880 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:17.138 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:17.139 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.139 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:17.139 [2024-07-15 10:24:41.830099] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:17.139 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:17.139 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:17.139 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.139 10:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:17.398 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:17.398 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.398 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:17.398 [2024-07-15 10:24:42.172427] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:17.398 [2024-07-15 10:24:42.172478] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.398 [2024-07-15 10:24:42.182171] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.398 [2024-07-15 10:24:42.182196] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.398 [2024-07-15 10:24:42.182204] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11dd830 name Existed_Raid, state offline 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:17.657 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:17:17.915 BaseBdev2 00:17:17.915 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:17.915 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:17.915 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:17.915 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:17.915 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:17.915 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:17.915 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.915 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:18.216 [ 00:17:18.216 { 00:17:18.216 "name": "BaseBdev2", 00:17:18.216 "aliases": [ 00:17:18.216 "0de61f68-2bcd-4920-a525-99a8a0b51fe2" 00:17:18.216 ], 00:17:18.216 "product_name": "Malloc disk", 00:17:18.216 "block_size": 512, 00:17:18.216 "num_blocks": 65536, 00:17:18.216 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:18.216 "assigned_rate_limits": { 00:17:18.216 "rw_ios_per_sec": 0, 00:17:18.216 "rw_mbytes_per_sec": 0, 00:17:18.216 "r_mbytes_per_sec": 0, 00:17:18.216 "w_mbytes_per_sec": 0 00:17:18.216 }, 00:17:18.216 "claimed": false, 00:17:18.216 "zoned": false, 00:17:18.216 "supported_io_types": { 00:17:18.216 "read": true, 00:17:18.216 "write": true, 00:17:18.216 "unmap": true, 00:17:18.216 "flush": true, 00:17:18.216 "reset": true, 00:17:18.216 "nvme_admin": false, 00:17:18.216 "nvme_io": false, 00:17:18.216 "nvme_io_md": false, 00:17:18.216 "write_zeroes": true, 00:17:18.216 "zcopy": true, 00:17:18.216 "get_zone_info": false, 00:17:18.216 "zone_management": false, 00:17:18.216 "zone_append": false, 00:17:18.216 "compare": false, 00:17:18.216 "compare_and_write": false, 00:17:18.216 "abort": true, 00:17:18.216 "seek_hole": false, 00:17:18.216 "seek_data": false, 00:17:18.216 "copy": true, 00:17:18.216 "nvme_iov_md": false 00:17:18.216 }, 00:17:18.216 "memory_domains": [ 00:17:18.216 { 00:17:18.216 "dma_device_id": "system", 00:17:18.216 "dma_device_type": 1 00:17:18.216 }, 00:17:18.216 { 00:17:18.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.216 "dma_device_type": 2 00:17:18.216 } 00:17:18.216 ], 00:17:18.216 "driver_specific": {} 00:17:18.216 } 00:17:18.216 ] 00:17:18.216 10:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:18.216 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:18.216 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:18.216 10:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:18.522 BaseBdev3 00:17:18.522 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:18.522 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:18.522 10:24:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.522 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:18.522 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:18.522 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.522 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.522 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:18.780 [ 00:17:18.780 { 00:17:18.780 "name": "BaseBdev3", 00:17:18.780 "aliases": [ 00:17:18.780 "96a3bf82-d20c-4075-a164-d0a9d49f7ae7" 00:17:18.780 ], 00:17:18.780 "product_name": "Malloc disk", 00:17:18.780 "block_size": 512, 00:17:18.780 "num_blocks": 65536, 00:17:18.780 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:18.780 "assigned_rate_limits": { 00:17:18.780 "rw_ios_per_sec": 0, 00:17:18.780 "rw_mbytes_per_sec": 0, 00:17:18.780 "r_mbytes_per_sec": 0, 00:17:18.780 "w_mbytes_per_sec": 0 00:17:18.780 }, 00:17:18.780 "claimed": false, 00:17:18.780 "zoned": false, 00:17:18.780 "supported_io_types": { 00:17:18.780 "read": true, 00:17:18.780 "write": true, 00:17:18.780 "unmap": true, 00:17:18.780 "flush": true, 00:17:18.780 "reset": true, 00:17:18.780 "nvme_admin": false, 00:17:18.780 "nvme_io": false, 00:17:18.780 "nvme_io_md": false, 00:17:18.780 "write_zeroes": true, 00:17:18.780 "zcopy": true, 00:17:18.780 "get_zone_info": false, 00:17:18.780 "zone_management": false, 00:17:18.780 "zone_append": false, 00:17:18.780 "compare": false, 00:17:18.780 "compare_and_write": false, 00:17:18.780 "abort": true, 00:17:18.780 "seek_hole": false, 00:17:18.780 "seek_data": false, 00:17:18.780 "copy": true, 00:17:18.780 "nvme_iov_md": false 00:17:18.780 }, 00:17:18.780 "memory_domains": [ 00:17:18.780 { 00:17:18.780 "dma_device_id": "system", 00:17:18.780 "dma_device_type": 1 00:17:18.780 }, 00:17:18.780 { 00:17:18.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.780 "dma_device_type": 2 00:17:18.780 } 00:17:18.780 ], 00:17:18.780 "driver_specific": {} 00:17:18.780 } 00:17:18.780 ] 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:18.780 BaseBdev4 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
[[ -z '' ]] 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.780 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.037 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:19.037 [ 00:17:19.037 { 00:17:19.037 "name": "BaseBdev4", 00:17:19.037 "aliases": [ 00:17:19.037 "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c" 00:17:19.037 ], 00:17:19.037 "product_name": "Malloc disk", 00:17:19.037 "block_size": 512, 00:17:19.037 "num_blocks": 65536, 00:17:19.037 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:19.037 "assigned_rate_limits": { 00:17:19.037 "rw_ios_per_sec": 0, 00:17:19.037 "rw_mbytes_per_sec": 0, 00:17:19.037 "r_mbytes_per_sec": 0, 00:17:19.037 "w_mbytes_per_sec": 0 00:17:19.037 }, 00:17:19.037 "claimed": false, 00:17:19.037 "zoned": false, 00:17:19.037 "supported_io_types": { 00:17:19.037 "read": true, 00:17:19.037 "write": true, 00:17:19.037 "unmap": true, 00:17:19.037 "flush": true, 00:17:19.037 "reset": true, 00:17:19.037 "nvme_admin": false, 00:17:19.037 "nvme_io": false, 00:17:19.037 "nvme_io_md": false, 00:17:19.037 "write_zeroes": true, 00:17:19.037 "zcopy": true, 00:17:19.037 "get_zone_info": false, 00:17:19.037 "zone_management": false, 00:17:19.037 "zone_append": false, 00:17:19.037 "compare": false, 00:17:19.037 "compare_and_write": false, 00:17:19.037 "abort": true, 00:17:19.037 "seek_hole": false, 00:17:19.037 "seek_data": false, 00:17:19.037 "copy": true, 00:17:19.037 "nvme_iov_md": false 00:17:19.037 }, 00:17:19.037 "memory_domains": [ 00:17:19.037 { 00:17:19.037 "dma_device_id": "system", 00:17:19.037 "dma_device_type": 1 00:17:19.037 }, 00:17:19.037 { 00:17:19.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.037 "dma_device_type": 2 00:17:19.037 } 00:17:19.037 ], 00:17:19.037 "driver_specific": {} 00:17:19.037 } 00:17:19.037 ] 00:17:19.037 10:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:19.037 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:19.037 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:19.037 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:19.295 [2024-07-15 10:24:43.978116] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.295 [2024-07-15 10:24:43.978147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.295 [2024-07-15 10:24:43.978159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:19.295 [2024-07-15 10:24:43.979084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:19.295 [2024-07-15 10:24:43.979113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:19.295 10:24:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.295 10:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.554 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.554 "name": "Existed_Raid", 00:17:19.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.554 "strip_size_kb": 0, 00:17:19.554 "state": "configuring", 00:17:19.554 "raid_level": "raid1", 00:17:19.554 "superblock": false, 00:17:19.554 "num_base_bdevs": 4, 00:17:19.554 "num_base_bdevs_discovered": 3, 00:17:19.554 "num_base_bdevs_operational": 4, 00:17:19.554 "base_bdevs_list": [ 00:17:19.554 { 00:17:19.554 "name": "BaseBdev1", 00:17:19.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.554 "is_configured": false, 00:17:19.554 "data_offset": 0, 00:17:19.554 "data_size": 0 00:17:19.554 }, 00:17:19.554 { 00:17:19.554 "name": "BaseBdev2", 00:17:19.554 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:19.554 "is_configured": true, 00:17:19.554 "data_offset": 0, 00:17:19.554 "data_size": 65536 00:17:19.554 }, 00:17:19.554 { 00:17:19.554 "name": "BaseBdev3", 00:17:19.554 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:19.554 "is_configured": true, 00:17:19.554 "data_offset": 0, 00:17:19.554 "data_size": 65536 00:17:19.554 }, 00:17:19.554 { 00:17:19.554 "name": "BaseBdev4", 00:17:19.554 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:19.554 "is_configured": true, 00:17:19.554 "data_offset": 0, 00:17:19.554 "data_size": 65536 00:17:19.554 } 00:17:19.554 ] 00:17:19.554 }' 00:17:19.554 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.554 10:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.120 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:20.120 [2024-07-15 10:24:44.780202] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:20.120 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:20.120 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.120 
10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.120 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:20.120 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:20.120 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:20.120 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.120 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.121 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.121 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.121 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.121 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.379 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.379 "name": "Existed_Raid", 00:17:20.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.379 "strip_size_kb": 0, 00:17:20.379 "state": "configuring", 00:17:20.379 "raid_level": "raid1", 00:17:20.379 "superblock": false, 00:17:20.379 "num_base_bdevs": 4, 00:17:20.379 "num_base_bdevs_discovered": 2, 00:17:20.379 "num_base_bdevs_operational": 4, 00:17:20.379 "base_bdevs_list": [ 00:17:20.379 { 00:17:20.379 "name": "BaseBdev1", 00:17:20.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.379 "is_configured": false, 00:17:20.379 "data_offset": 0, 00:17:20.379 "data_size": 0 00:17:20.379 }, 00:17:20.379 { 00:17:20.379 "name": null, 00:17:20.379 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:20.379 "is_configured": false, 00:17:20.379 "data_offset": 0, 00:17:20.379 "data_size": 65536 00:17:20.379 }, 00:17:20.379 { 00:17:20.379 "name": "BaseBdev3", 00:17:20.379 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:20.379 "is_configured": true, 00:17:20.379 "data_offset": 0, 00:17:20.379 "data_size": 65536 00:17:20.379 }, 00:17:20.379 { 00:17:20.379 "name": "BaseBdev4", 00:17:20.379 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:20.379 "is_configured": true, 00:17:20.379 "data_offset": 0, 00:17:20.379 "data_size": 65536 00:17:20.379 } 00:17:20.379 ] 00:17:20.379 }' 00:17:20.379 10:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.379 10:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.946 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:20.946 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.946 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:20.946 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.204 [2024-07-15 10:24:45.797496] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.204 BaseBdev1 00:17:21.204 10:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:21.204 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:21.204 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:21.204 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:21.204 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:21.204 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:21.204 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.204 10:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:21.463 [ 00:17:21.463 { 00:17:21.463 "name": "BaseBdev1", 00:17:21.463 "aliases": [ 00:17:21.463 "9609283e-3461-4a96-a362-8209af133a2a" 00:17:21.463 ], 00:17:21.463 "product_name": "Malloc disk", 00:17:21.463 "block_size": 512, 00:17:21.463 "num_blocks": 65536, 00:17:21.463 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:21.463 "assigned_rate_limits": { 00:17:21.463 "rw_ios_per_sec": 0, 00:17:21.463 "rw_mbytes_per_sec": 0, 00:17:21.463 "r_mbytes_per_sec": 0, 00:17:21.463 "w_mbytes_per_sec": 0 00:17:21.463 }, 00:17:21.463 "claimed": true, 00:17:21.463 "claim_type": "exclusive_write", 00:17:21.463 "zoned": false, 00:17:21.463 "supported_io_types": { 00:17:21.463 "read": true, 00:17:21.463 "write": true, 00:17:21.463 "unmap": true, 00:17:21.463 "flush": true, 00:17:21.463 "reset": true, 00:17:21.463 "nvme_admin": false, 00:17:21.463 "nvme_io": false, 00:17:21.463 "nvme_io_md": false, 00:17:21.463 "write_zeroes": true, 00:17:21.463 "zcopy": true, 00:17:21.463 "get_zone_info": false, 00:17:21.463 "zone_management": false, 00:17:21.463 "zone_append": false, 00:17:21.463 "compare": false, 00:17:21.463 "compare_and_write": false, 00:17:21.463 "abort": true, 00:17:21.463 "seek_hole": false, 00:17:21.463 "seek_data": false, 00:17:21.463 "copy": true, 00:17:21.463 "nvme_iov_md": false 00:17:21.463 }, 00:17:21.463 "memory_domains": [ 00:17:21.463 { 00:17:21.463 "dma_device_id": "system", 00:17:21.463 "dma_device_type": 1 00:17:21.463 }, 00:17:21.463 { 00:17:21.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.463 "dma_device_type": 2 00:17:21.463 } 00:17:21.463 ], 00:17:21.463 "driver_specific": {} 00:17:21.463 } 00:17:21.463 ] 00:17:21.463 10:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:21.463 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:21.463 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.463 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:21.463 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.463 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:17:21.464 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:21.464 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.464 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.464 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.464 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.464 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.464 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.722 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.722 "name": "Existed_Raid", 00:17:21.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.722 "strip_size_kb": 0, 00:17:21.722 "state": "configuring", 00:17:21.722 "raid_level": "raid1", 00:17:21.722 "superblock": false, 00:17:21.722 "num_base_bdevs": 4, 00:17:21.722 "num_base_bdevs_discovered": 3, 00:17:21.722 "num_base_bdevs_operational": 4, 00:17:21.722 "base_bdevs_list": [ 00:17:21.722 { 00:17:21.722 "name": "BaseBdev1", 00:17:21.722 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:21.722 "is_configured": true, 00:17:21.722 "data_offset": 0, 00:17:21.722 "data_size": 65536 00:17:21.722 }, 00:17:21.722 { 00:17:21.722 "name": null, 00:17:21.722 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:21.722 "is_configured": false, 00:17:21.722 "data_offset": 0, 00:17:21.722 "data_size": 65536 00:17:21.722 }, 00:17:21.722 { 00:17:21.722 "name": "BaseBdev3", 00:17:21.722 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:21.722 "is_configured": true, 00:17:21.722 "data_offset": 0, 00:17:21.722 "data_size": 65536 00:17:21.722 }, 00:17:21.722 { 00:17:21.722 "name": "BaseBdev4", 00:17:21.722 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:21.722 "is_configured": true, 00:17:21.722 "data_offset": 0, 00:17:21.722 "data_size": 65536 00:17:21.722 } 00:17:21.722 ] 00:17:21.722 }' 00:17:21.722 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.722 10:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.289 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.289 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:22.289 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:22.289 10:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:22.548 [2024-07-15 10:24:47.125038] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.548 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.548 "name": "Existed_Raid", 00:17:22.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.548 "strip_size_kb": 0, 00:17:22.548 "state": "configuring", 00:17:22.548 "raid_level": "raid1", 00:17:22.548 "superblock": false, 00:17:22.548 "num_base_bdevs": 4, 00:17:22.548 "num_base_bdevs_discovered": 2, 00:17:22.548 "num_base_bdevs_operational": 4, 00:17:22.548 "base_bdevs_list": [ 00:17:22.548 { 00:17:22.548 "name": "BaseBdev1", 00:17:22.548 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:22.548 "is_configured": true, 00:17:22.548 "data_offset": 0, 00:17:22.548 "data_size": 65536 00:17:22.549 }, 00:17:22.549 { 00:17:22.549 "name": null, 00:17:22.549 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:22.549 "is_configured": false, 00:17:22.549 "data_offset": 0, 00:17:22.549 "data_size": 65536 00:17:22.549 }, 00:17:22.549 { 00:17:22.549 "name": null, 00:17:22.549 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:22.549 "is_configured": false, 00:17:22.549 "data_offset": 0, 00:17:22.549 "data_size": 65536 00:17:22.549 }, 00:17:22.549 { 00:17:22.549 "name": "BaseBdev4", 00:17:22.549 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:22.549 "is_configured": true, 00:17:22.549 "data_offset": 0, 00:17:22.549 "data_size": 65536 00:17:22.549 } 00:17:22.549 ] 00:17:22.549 }' 00:17:22.549 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.549 10:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.116 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.116 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:23.375 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:23.375 10:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 
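At this point in the run the harness has detached BaseBdev3 with bdev_raid_remove_base_bdev, confirmed that slot 2 of base_bdevs_list reads is_configured == false, and is re-attaching the same bdev with bdev_raid_add_base_bdev. A minimal standalone sketch of that remove/re-add cycle, assuming an SPDK target is already serving /var/tmp/spdk-raid.sock with the Existed_Raid/BaseBdevN setup from this test in place, and reusing only the RPCs and jq filters that appear in this log (the harness's verify_raid_bdev_state helper performs a fuller comparison than this):

#!/usr/bin/env bash
# Sketch only: drop one base bdev from a configuring raid1 set, then re-attach it,
# checking the relevant base_bdevs_list slot the same way the test does with jq.
set -euo pipefail

rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
slot_configured() { rpc bdev_raid_get_bdevs all | jq ".[0].base_bdevs_list[$1].is_configured"; }

rpc bdev_raid_remove_base_bdev BaseBdev3             # slot 2 drops out of the set
[[ "$(slot_configured 2)" == "false" ]]

rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3   # the same bdev is claimed again
[[ "$(slot_configured 2)" == "true" ]]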
00:17:23.375 [2024-07-15 10:24:48.139658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.375 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.634 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.634 "name": "Existed_Raid", 00:17:23.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.634 "strip_size_kb": 0, 00:17:23.634 "state": "configuring", 00:17:23.634 "raid_level": "raid1", 00:17:23.634 "superblock": false, 00:17:23.634 "num_base_bdevs": 4, 00:17:23.634 "num_base_bdevs_discovered": 3, 00:17:23.634 "num_base_bdevs_operational": 4, 00:17:23.634 "base_bdevs_list": [ 00:17:23.634 { 00:17:23.634 "name": "BaseBdev1", 00:17:23.634 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:23.634 "is_configured": true, 00:17:23.634 "data_offset": 0, 00:17:23.634 "data_size": 65536 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "name": null, 00:17:23.634 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:23.634 "is_configured": false, 00:17:23.634 "data_offset": 0, 00:17:23.634 "data_size": 65536 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "name": "BaseBdev3", 00:17:23.634 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:23.634 "is_configured": true, 00:17:23.634 "data_offset": 0, 00:17:23.634 "data_size": 65536 00:17:23.634 }, 00:17:23.634 { 00:17:23.634 "name": "BaseBdev4", 00:17:23.634 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:23.634 "is_configured": true, 00:17:23.634 "data_offset": 0, 00:17:23.634 "data_size": 65536 00:17:23.634 } 00:17:23.634 ] 00:17:23.634 }' 00:17:23.634 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.634 10:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.202 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.202 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:17:24.460 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:24.460 10:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:24.460 [2024-07-15 10:24:49.146263] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.460 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.718 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.718 "name": "Existed_Raid", 00:17:24.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.718 "strip_size_kb": 0, 00:17:24.718 "state": "configuring", 00:17:24.718 "raid_level": "raid1", 00:17:24.718 "superblock": false, 00:17:24.718 "num_base_bdevs": 4, 00:17:24.718 "num_base_bdevs_discovered": 2, 00:17:24.718 "num_base_bdevs_operational": 4, 00:17:24.719 "base_bdevs_list": [ 00:17:24.719 { 00:17:24.719 "name": null, 00:17:24.719 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:24.719 "is_configured": false, 00:17:24.719 "data_offset": 0, 00:17:24.719 "data_size": 65536 00:17:24.719 }, 00:17:24.719 { 00:17:24.719 "name": null, 00:17:24.719 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:24.719 "is_configured": false, 00:17:24.719 "data_offset": 0, 00:17:24.719 "data_size": 65536 00:17:24.719 }, 00:17:24.719 { 00:17:24.719 "name": "BaseBdev3", 00:17:24.719 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:24.719 "is_configured": true, 00:17:24.719 "data_offset": 0, 00:17:24.719 "data_size": 65536 00:17:24.719 }, 00:17:24.719 { 00:17:24.719 "name": "BaseBdev4", 00:17:24.719 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:24.719 "is_configured": true, 00:17:24.719 "data_offset": 0, 00:17:24.719 "data_size": 65536 00:17:24.719 } 00:17:24.719 ] 00:17:24.719 }' 00:17:24.719 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.719 10:24:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.285 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:25.285 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.285 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:25.285 10:24:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:25.543 [2024-07-15 10:24:50.146467] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.543 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.802 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.802 "name": "Existed_Raid", 00:17:25.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.802 "strip_size_kb": 0, 00:17:25.802 "state": "configuring", 00:17:25.802 "raid_level": "raid1", 00:17:25.802 "superblock": false, 00:17:25.802 "num_base_bdevs": 4, 00:17:25.802 "num_base_bdevs_discovered": 3, 00:17:25.802 "num_base_bdevs_operational": 4, 00:17:25.802 "base_bdevs_list": [ 00:17:25.802 { 00:17:25.802 "name": null, 00:17:25.802 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:25.802 "is_configured": false, 00:17:25.802 "data_offset": 0, 00:17:25.802 "data_size": 65536 00:17:25.802 }, 00:17:25.802 { 00:17:25.802 "name": "BaseBdev2", 00:17:25.802 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:25.802 "is_configured": true, 00:17:25.802 "data_offset": 0, 00:17:25.802 "data_size": 65536 00:17:25.802 }, 00:17:25.802 { 00:17:25.802 "name": "BaseBdev3", 00:17:25.802 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:25.802 "is_configured": true, 00:17:25.802 "data_offset": 0, 00:17:25.802 "data_size": 65536 00:17:25.802 }, 00:17:25.802 { 00:17:25.802 "name": "BaseBdev4", 
00:17:25.802 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:25.802 "is_configured": true, 00:17:25.802 "data_offset": 0, 00:17:25.802 "data_size": 65536 00:17:25.802 } 00:17:25.802 ] 00:17:25.802 }' 00:17:25.802 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.802 10:24:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.059 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.059 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:26.316 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:26.316 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.316 10:24:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:26.574 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9609283e-3461-4a96-a362-8209af133a2a 00:17:26.574 [2024-07-15 10:24:51.304037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:26.574 [2024-07-15 10:24:51.304063] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11d5d30 00:17:26.574 [2024-07-15 10:24:51.304073] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:26.574 [2024-07-15 10:24:51.304198] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x138ee30 00:17:26.574 [2024-07-15 10:24:51.304280] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11d5d30 00:17:26.574 [2024-07-15 10:24:51.304286] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11d5d30 00:17:26.574 [2024-07-15 10:24:51.304411] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.574 NewBaseBdev 00:17:26.574 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:26.574 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:26.574 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:26.574 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:26.574 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:26.574 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:26.574 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:26.833 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:27.092 [ 00:17:27.092 { 00:17:27.092 "name": "NewBaseBdev", 00:17:27.092 "aliases": [ 00:17:27.092 
"9609283e-3461-4a96-a362-8209af133a2a" 00:17:27.092 ], 00:17:27.092 "product_name": "Malloc disk", 00:17:27.092 "block_size": 512, 00:17:27.092 "num_blocks": 65536, 00:17:27.092 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:27.092 "assigned_rate_limits": { 00:17:27.092 "rw_ios_per_sec": 0, 00:17:27.092 "rw_mbytes_per_sec": 0, 00:17:27.092 "r_mbytes_per_sec": 0, 00:17:27.092 "w_mbytes_per_sec": 0 00:17:27.092 }, 00:17:27.092 "claimed": true, 00:17:27.092 "claim_type": "exclusive_write", 00:17:27.092 "zoned": false, 00:17:27.092 "supported_io_types": { 00:17:27.092 "read": true, 00:17:27.092 "write": true, 00:17:27.092 "unmap": true, 00:17:27.092 "flush": true, 00:17:27.093 "reset": true, 00:17:27.093 "nvme_admin": false, 00:17:27.093 "nvme_io": false, 00:17:27.093 "nvme_io_md": false, 00:17:27.093 "write_zeroes": true, 00:17:27.093 "zcopy": true, 00:17:27.093 "get_zone_info": false, 00:17:27.093 "zone_management": false, 00:17:27.093 "zone_append": false, 00:17:27.093 "compare": false, 00:17:27.093 "compare_and_write": false, 00:17:27.093 "abort": true, 00:17:27.093 "seek_hole": false, 00:17:27.093 "seek_data": false, 00:17:27.093 "copy": true, 00:17:27.093 "nvme_iov_md": false 00:17:27.093 }, 00:17:27.093 "memory_domains": [ 00:17:27.093 { 00:17:27.093 "dma_device_id": "system", 00:17:27.093 "dma_device_type": 1 00:17:27.093 }, 00:17:27.093 { 00:17:27.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.093 "dma_device_type": 2 00:17:27.093 } 00:17:27.093 ], 00:17:27.093 "driver_specific": {} 00:17:27.093 } 00:17:27.093 ] 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.093 "name": "Existed_Raid", 00:17:27.093 "uuid": "0040dc7f-c479-4b60-8198-1d2260c6960a", 00:17:27.093 "strip_size_kb": 0, 00:17:27.093 "state": "online", 00:17:27.093 "raid_level": "raid1", 00:17:27.093 "superblock": false, 00:17:27.093 "num_base_bdevs": 4, 00:17:27.093 
"num_base_bdevs_discovered": 4, 00:17:27.093 "num_base_bdevs_operational": 4, 00:17:27.093 "base_bdevs_list": [ 00:17:27.093 { 00:17:27.093 "name": "NewBaseBdev", 00:17:27.093 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:27.093 "is_configured": true, 00:17:27.093 "data_offset": 0, 00:17:27.093 "data_size": 65536 00:17:27.093 }, 00:17:27.093 { 00:17:27.093 "name": "BaseBdev2", 00:17:27.093 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:27.093 "is_configured": true, 00:17:27.093 "data_offset": 0, 00:17:27.093 "data_size": 65536 00:17:27.093 }, 00:17:27.093 { 00:17:27.093 "name": "BaseBdev3", 00:17:27.093 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:27.093 "is_configured": true, 00:17:27.093 "data_offset": 0, 00:17:27.093 "data_size": 65536 00:17:27.093 }, 00:17:27.093 { 00:17:27.093 "name": "BaseBdev4", 00:17:27.093 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:27.093 "is_configured": true, 00:17:27.093 "data_offset": 0, 00:17:27.093 "data_size": 65536 00:17:27.093 } 00:17:27.093 ] 00:17:27.093 }' 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.093 10:24:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.661 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:27.661 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:27.661 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:27.661 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:27.661 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:27.661 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:27.661 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:27.661 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:27.920 [2024-07-15 10:24:52.475255] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.920 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:27.920 "name": "Existed_Raid", 00:17:27.920 "aliases": [ 00:17:27.920 "0040dc7f-c479-4b60-8198-1d2260c6960a" 00:17:27.920 ], 00:17:27.920 "product_name": "Raid Volume", 00:17:27.920 "block_size": 512, 00:17:27.920 "num_blocks": 65536, 00:17:27.920 "uuid": "0040dc7f-c479-4b60-8198-1d2260c6960a", 00:17:27.920 "assigned_rate_limits": { 00:17:27.920 "rw_ios_per_sec": 0, 00:17:27.920 "rw_mbytes_per_sec": 0, 00:17:27.920 "r_mbytes_per_sec": 0, 00:17:27.920 "w_mbytes_per_sec": 0 00:17:27.920 }, 00:17:27.920 "claimed": false, 00:17:27.920 "zoned": false, 00:17:27.920 "supported_io_types": { 00:17:27.920 "read": true, 00:17:27.920 "write": true, 00:17:27.920 "unmap": false, 00:17:27.920 "flush": false, 00:17:27.920 "reset": true, 00:17:27.920 "nvme_admin": false, 00:17:27.920 "nvme_io": false, 00:17:27.920 "nvme_io_md": false, 00:17:27.920 "write_zeroes": true, 00:17:27.920 "zcopy": false, 00:17:27.920 "get_zone_info": false, 00:17:27.920 "zone_management": false, 00:17:27.920 "zone_append": false, 00:17:27.920 "compare": false, 00:17:27.920 "compare_and_write": false, 00:17:27.920 "abort": 
false, 00:17:27.920 "seek_hole": false, 00:17:27.920 "seek_data": false, 00:17:27.920 "copy": false, 00:17:27.920 "nvme_iov_md": false 00:17:27.920 }, 00:17:27.920 "memory_domains": [ 00:17:27.920 { 00:17:27.920 "dma_device_id": "system", 00:17:27.920 "dma_device_type": 1 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.920 "dma_device_type": 2 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "dma_device_id": "system", 00:17:27.920 "dma_device_type": 1 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.920 "dma_device_type": 2 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "dma_device_id": "system", 00:17:27.920 "dma_device_type": 1 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.920 "dma_device_type": 2 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "dma_device_id": "system", 00:17:27.920 "dma_device_type": 1 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.920 "dma_device_type": 2 00:17:27.920 } 00:17:27.920 ], 00:17:27.920 "driver_specific": { 00:17:27.920 "raid": { 00:17:27.920 "uuid": "0040dc7f-c479-4b60-8198-1d2260c6960a", 00:17:27.920 "strip_size_kb": 0, 00:17:27.920 "state": "online", 00:17:27.920 "raid_level": "raid1", 00:17:27.920 "superblock": false, 00:17:27.920 "num_base_bdevs": 4, 00:17:27.920 "num_base_bdevs_discovered": 4, 00:17:27.920 "num_base_bdevs_operational": 4, 00:17:27.920 "base_bdevs_list": [ 00:17:27.920 { 00:17:27.920 "name": "NewBaseBdev", 00:17:27.920 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:27.920 "is_configured": true, 00:17:27.920 "data_offset": 0, 00:17:27.920 "data_size": 65536 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "name": "BaseBdev2", 00:17:27.920 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:27.920 "is_configured": true, 00:17:27.920 "data_offset": 0, 00:17:27.920 "data_size": 65536 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "name": "BaseBdev3", 00:17:27.920 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:27.920 "is_configured": true, 00:17:27.920 "data_offset": 0, 00:17:27.920 "data_size": 65536 00:17:27.920 }, 00:17:27.920 { 00:17:27.920 "name": "BaseBdev4", 00:17:27.920 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:27.920 "is_configured": true, 00:17:27.920 "data_offset": 0, 00:17:27.921 "data_size": 65536 00:17:27.921 } 00:17:27.921 ] 00:17:27.921 } 00:17:27.921 } 00:17:27.921 }' 00:17:27.921 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:27.921 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:27.921 BaseBdev2 00:17:27.921 BaseBdev3 00:17:27.921 BaseBdev4' 00:17:27.921 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.921 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.921 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:27.921 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.921 "name": "NewBaseBdev", 00:17:27.921 "aliases": [ 00:17:27.921 "9609283e-3461-4a96-a362-8209af133a2a" 00:17:27.921 ], 00:17:27.921 "product_name": "Malloc disk", 00:17:27.921 
"block_size": 512, 00:17:27.921 "num_blocks": 65536, 00:17:27.921 "uuid": "9609283e-3461-4a96-a362-8209af133a2a", 00:17:27.921 "assigned_rate_limits": { 00:17:27.921 "rw_ios_per_sec": 0, 00:17:27.921 "rw_mbytes_per_sec": 0, 00:17:27.921 "r_mbytes_per_sec": 0, 00:17:27.921 "w_mbytes_per_sec": 0 00:17:27.921 }, 00:17:27.921 "claimed": true, 00:17:27.921 "claim_type": "exclusive_write", 00:17:27.921 "zoned": false, 00:17:27.921 "supported_io_types": { 00:17:27.921 "read": true, 00:17:27.921 "write": true, 00:17:27.921 "unmap": true, 00:17:27.921 "flush": true, 00:17:27.921 "reset": true, 00:17:27.921 "nvme_admin": false, 00:17:27.921 "nvme_io": false, 00:17:27.921 "nvme_io_md": false, 00:17:27.921 "write_zeroes": true, 00:17:27.921 "zcopy": true, 00:17:27.921 "get_zone_info": false, 00:17:27.921 "zone_management": false, 00:17:27.921 "zone_append": false, 00:17:27.921 "compare": false, 00:17:27.921 "compare_and_write": false, 00:17:27.921 "abort": true, 00:17:27.921 "seek_hole": false, 00:17:27.921 "seek_data": false, 00:17:27.921 "copy": true, 00:17:27.921 "nvme_iov_md": false 00:17:27.921 }, 00:17:27.921 "memory_domains": [ 00:17:27.921 { 00:17:27.921 "dma_device_id": "system", 00:17:27.921 "dma_device_type": 1 00:17:27.921 }, 00:17:27.921 { 00:17:27.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.921 "dma_device_type": 2 00:17:27.921 } 00:17:27.921 ], 00:17:27.921 "driver_specific": {} 00:17:27.921 }' 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.180 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.439 10:24:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.439 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.439 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.439 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:28.439 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.439 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.439 "name": "BaseBdev2", 00:17:28.439 "aliases": [ 00:17:28.439 "0de61f68-2bcd-4920-a525-99a8a0b51fe2" 00:17:28.439 ], 00:17:28.439 "product_name": "Malloc disk", 00:17:28.439 "block_size": 512, 00:17:28.439 "num_blocks": 65536, 00:17:28.439 "uuid": "0de61f68-2bcd-4920-a525-99a8a0b51fe2", 00:17:28.439 "assigned_rate_limits": { 00:17:28.439 
"rw_ios_per_sec": 0, 00:17:28.439 "rw_mbytes_per_sec": 0, 00:17:28.439 "r_mbytes_per_sec": 0, 00:17:28.439 "w_mbytes_per_sec": 0 00:17:28.439 }, 00:17:28.439 "claimed": true, 00:17:28.439 "claim_type": "exclusive_write", 00:17:28.439 "zoned": false, 00:17:28.439 "supported_io_types": { 00:17:28.439 "read": true, 00:17:28.439 "write": true, 00:17:28.439 "unmap": true, 00:17:28.439 "flush": true, 00:17:28.439 "reset": true, 00:17:28.439 "nvme_admin": false, 00:17:28.439 "nvme_io": false, 00:17:28.439 "nvme_io_md": false, 00:17:28.439 "write_zeroes": true, 00:17:28.439 "zcopy": true, 00:17:28.439 "get_zone_info": false, 00:17:28.439 "zone_management": false, 00:17:28.439 "zone_append": false, 00:17:28.439 "compare": false, 00:17:28.439 "compare_and_write": false, 00:17:28.439 "abort": true, 00:17:28.439 "seek_hole": false, 00:17:28.439 "seek_data": false, 00:17:28.439 "copy": true, 00:17:28.439 "nvme_iov_md": false 00:17:28.439 }, 00:17:28.439 "memory_domains": [ 00:17:28.439 { 00:17:28.439 "dma_device_id": "system", 00:17:28.439 "dma_device_type": 1 00:17:28.439 }, 00:17:28.439 { 00:17:28.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.439 "dma_device_type": 2 00:17:28.439 } 00:17:28.439 ], 00:17:28.439 "driver_specific": {} 00:17:28.439 }' 00:17:28.439 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.439 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.697 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:28.956 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:28.956 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.956 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.956 "name": "BaseBdev3", 00:17:28.956 "aliases": [ 00:17:28.956 "96a3bf82-d20c-4075-a164-d0a9d49f7ae7" 00:17:28.956 ], 00:17:28.956 "product_name": "Malloc disk", 00:17:28.956 "block_size": 512, 00:17:28.956 "num_blocks": 65536, 00:17:28.956 "uuid": "96a3bf82-d20c-4075-a164-d0a9d49f7ae7", 00:17:28.956 "assigned_rate_limits": { 00:17:28.956 "rw_ios_per_sec": 0, 00:17:28.956 "rw_mbytes_per_sec": 0, 00:17:28.956 "r_mbytes_per_sec": 0, 00:17:28.956 "w_mbytes_per_sec": 0 00:17:28.956 }, 00:17:28.956 "claimed": true, 
00:17:28.956 "claim_type": "exclusive_write", 00:17:28.956 "zoned": false, 00:17:28.956 "supported_io_types": { 00:17:28.956 "read": true, 00:17:28.956 "write": true, 00:17:28.956 "unmap": true, 00:17:28.956 "flush": true, 00:17:28.956 "reset": true, 00:17:28.956 "nvme_admin": false, 00:17:28.956 "nvme_io": false, 00:17:28.956 "nvme_io_md": false, 00:17:28.956 "write_zeroes": true, 00:17:28.956 "zcopy": true, 00:17:28.956 "get_zone_info": false, 00:17:28.956 "zone_management": false, 00:17:28.956 "zone_append": false, 00:17:28.956 "compare": false, 00:17:28.956 "compare_and_write": false, 00:17:28.956 "abort": true, 00:17:28.956 "seek_hole": false, 00:17:28.956 "seek_data": false, 00:17:28.956 "copy": true, 00:17:28.956 "nvme_iov_md": false 00:17:28.956 }, 00:17:28.956 "memory_domains": [ 00:17:28.956 { 00:17:28.956 "dma_device_id": "system", 00:17:28.956 "dma_device_type": 1 00:17:28.956 }, 00:17:28.956 { 00:17:28.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.956 "dma_device_type": 2 00:17:28.956 } 00:17:28.956 ], 00:17:28.956 "driver_specific": {} 00:17:28.956 }' 00:17:28.956 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.956 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.956 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.956 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:29.215 10:24:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:29.473 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:29.473 "name": "BaseBdev4", 00:17:29.473 "aliases": [ 00:17:29.473 "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c" 00:17:29.473 ], 00:17:29.473 "product_name": "Malloc disk", 00:17:29.473 "block_size": 512, 00:17:29.473 "num_blocks": 65536, 00:17:29.473 "uuid": "5ef7f16f-a7b3-4b8b-9c30-f6ceccc6c65c", 00:17:29.473 "assigned_rate_limits": { 00:17:29.473 "rw_ios_per_sec": 0, 00:17:29.473 "rw_mbytes_per_sec": 0, 00:17:29.473 "r_mbytes_per_sec": 0, 00:17:29.473 "w_mbytes_per_sec": 0 00:17:29.473 }, 00:17:29.473 "claimed": true, 00:17:29.473 "claim_type": "exclusive_write", 00:17:29.473 "zoned": false, 00:17:29.473 "supported_io_types": { 00:17:29.473 "read": true, 00:17:29.473 "write": true, 00:17:29.473 
"unmap": true, 00:17:29.473 "flush": true, 00:17:29.473 "reset": true, 00:17:29.473 "nvme_admin": false, 00:17:29.473 "nvme_io": false, 00:17:29.473 "nvme_io_md": false, 00:17:29.473 "write_zeroes": true, 00:17:29.473 "zcopy": true, 00:17:29.473 "get_zone_info": false, 00:17:29.473 "zone_management": false, 00:17:29.473 "zone_append": false, 00:17:29.473 "compare": false, 00:17:29.473 "compare_and_write": false, 00:17:29.473 "abort": true, 00:17:29.473 "seek_hole": false, 00:17:29.473 "seek_data": false, 00:17:29.473 "copy": true, 00:17:29.473 "nvme_iov_md": false 00:17:29.473 }, 00:17:29.473 "memory_domains": [ 00:17:29.473 { 00:17:29.473 "dma_device_id": "system", 00:17:29.473 "dma_device_type": 1 00:17:29.473 }, 00:17:29.473 { 00:17:29.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.473 "dma_device_type": 2 00:17:29.474 } 00:17:29.474 ], 00:17:29.474 "driver_specific": {} 00:17:29.474 }' 00:17:29.474 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.474 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:29.474 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:29.474 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.474 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:29.732 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:29.732 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.732 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:29.732 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:29.732 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.732 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:29.732 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:29.732 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:29.992 [2024-07-15 10:24:54.596512] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.992 [2024-07-15 10:24:54.596533] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.992 [2024-07-15 10:24:54.596569] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.992 [2024-07-15 10:24:54.596754] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.992 [2024-07-15 10:24:54.596762] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11d5d30 name Existed_Raid, state offline 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1832957 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 1832957 ']' 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 1832957 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # 
'[' Linux = Linux ']' 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1832957 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1832957' 00:17:29.992 killing process with pid 1832957 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 1832957 00:17:29.992 [2024-07-15 10:24:54.664277] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.992 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 1832957 00:17:29.992 [2024-07-15 10:24:54.693326] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:30.252 00:17:30.252 real 0m24.186s 00:17:30.252 user 0m44.189s 00:17:30.252 sys 0m4.621s 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.252 ************************************ 00:17:30.252 END TEST raid_state_function_test 00:17:30.252 ************************************ 00:17:30.252 10:24:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:30.252 10:24:54 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:30.252 10:24:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:30.252 10:24:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.252 10:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.252 ************************************ 00:17:30.252 START TEST raid_state_function_test_sb 00:17:30.252 ************************************ 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1837631 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1837631' 00:17:30.252 Process raid pid: 1837631 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1837631 /var/tmp/spdk-raid.sock 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1837631 ']' 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:30.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.252 10:24:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.252 [2024-07-15 10:24:55.005499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
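The superblock variant drives the same verify_raid_bdev_state checks, but it launches its own bdev_svc target on /var/tmp/spdk-raid.sock and registers the raid with the -s flag before any base bdevs exist, so the array starts out in the "configuring" state shown below with zero base bdevs discovered. A rough sketch of that startup and first create call, assuming the workspace paths used in this run and a plain socket-exists poll as an illustrative stand-in for the harness's waitforlisten helper:

#!/usr/bin/env bash
# Sketch only: start a bdev_svc target and register a superblock-backed raid1
# whose base bdevs do not exist yet, leaving it in the "configuring" state.
set -euo pipefail

sock=/var/tmp/spdk-raid.sock
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
svc_pid=$!
trap 'kill "$svc_pid"' EXIT

# Crude stand-in for waitforlisten: wait for the RPC socket to show up.
until [[ -S "$sock" ]]; do sleep 0.1; done

rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s "$sock" "$@"; }

rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: configuring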
00:17:30.252 [2024-07-15 10:24:55.005543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:30.512 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.512 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:30.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.513 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:30.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.513 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:30.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.513 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:30.513 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.513 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:30.513 [2024-07-15 10:24:55.094810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.513 [2024-07-15 10:24:55.167678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.513 [2024-07-15 10:24:55.221500] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.513 [2024-07-15 10:24:55.221528] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.144 10:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.144 10:24:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:31.144 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:31.423 [2024-07-15 10:24:55.952453] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.423 [2024-07-15 10:24:55.952489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.423 [2024-07-15 10:24:55.952498] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.423 [2024-07-15 10:24:55.952509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.423 [2024-07-15 10:24:55.952516] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:31.423 [2024-07-15 10:24:55.952525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:31.423 [2024-07-15 10:24:55.952532] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:31.423 [2024-07-15 10:24:55.952541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:31.423 10:24:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.423 10:24:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.423 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.423 "name": "Existed_Raid", 00:17:31.423 "uuid": "e7303314-6636-4455-a468-0546a6274b0a", 00:17:31.423 "strip_size_kb": 0, 00:17:31.423 "state": "configuring", 00:17:31.423 "raid_level": "raid1", 00:17:31.423 "superblock": true, 00:17:31.423 "num_base_bdevs": 4, 00:17:31.423 "num_base_bdevs_discovered": 0, 00:17:31.423 "num_base_bdevs_operational": 4, 00:17:31.423 "base_bdevs_list": [ 00:17:31.423 { 00:17:31.423 "name": "BaseBdev1", 00:17:31.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.423 "is_configured": false, 00:17:31.423 "data_offset": 0, 00:17:31.423 "data_size": 0 00:17:31.423 }, 00:17:31.423 { 00:17:31.423 "name": "BaseBdev2", 00:17:31.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.423 "is_configured": false, 00:17:31.423 "data_offset": 0, 00:17:31.423 "data_size": 0 00:17:31.423 }, 00:17:31.423 { 00:17:31.423 "name": "BaseBdev3", 00:17:31.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.423 "is_configured": false, 00:17:31.423 "data_offset": 0, 00:17:31.423 "data_size": 0 00:17:31.423 }, 00:17:31.423 { 00:17:31.423 "name": "BaseBdev4", 00:17:31.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.423 "is_configured": false, 00:17:31.423 "data_offset": 0, 00:17:31.423 "data_size": 0 00:17:31.423 } 00:17:31.423 ] 00:17:31.423 }' 00:17:31.423 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.423 10:24:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.989 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:31.989 [2024-07-15 10:24:56.770470] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.989 [2024-07-15 10:24:56.770493] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb6f60 name Existed_Raid, state configuring 00:17:32.247 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:32.247 [2024-07-15 10:24:56.938924] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.247 [2024-07-15 10:24:56.938947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.247 [2024-07-15 10:24:56.938956] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.247 [2024-07-15 10:24:56.938965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.247 [2024-07-15 10:24:56.938972] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.247 [2024-07-15 10:24:56.938982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.247 [2024-07-15 10:24:56.938989] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:32.247 [2024-07-15 10:24:56.939000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:32.247 10:24:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:32.505 [2024-07-15 10:24:57.099784] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.505 BaseBdev1 00:17:32.505 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:32.505 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:32.505 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:32.505 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:32.505 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:32.505 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:32.505 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:32.505 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:32.763 [ 00:17:32.763 { 00:17:32.763 "name": "BaseBdev1", 00:17:32.763 "aliases": [ 00:17:32.763 "230670c9-f6cc-4724-ab28-ade79b06a914" 00:17:32.763 ], 00:17:32.763 "product_name": "Malloc disk", 00:17:32.763 "block_size": 512, 00:17:32.763 "num_blocks": 65536, 00:17:32.763 "uuid": "230670c9-f6cc-4724-ab28-ade79b06a914", 00:17:32.763 "assigned_rate_limits": { 00:17:32.763 "rw_ios_per_sec": 0, 00:17:32.763 "rw_mbytes_per_sec": 0, 00:17:32.763 "r_mbytes_per_sec": 0, 00:17:32.763 "w_mbytes_per_sec": 0 00:17:32.763 }, 00:17:32.763 "claimed": true, 00:17:32.763 "claim_type": "exclusive_write", 00:17:32.763 "zoned": false, 00:17:32.763 "supported_io_types": { 00:17:32.763 "read": true, 00:17:32.763 "write": true, 00:17:32.763 "unmap": true, 00:17:32.763 "flush": true, 00:17:32.763 "reset": true, 00:17:32.763 "nvme_admin": false, 00:17:32.763 "nvme_io": false, 00:17:32.763 "nvme_io_md": false, 00:17:32.763 "write_zeroes": true, 00:17:32.763 
"zcopy": true, 00:17:32.763 "get_zone_info": false, 00:17:32.763 "zone_management": false, 00:17:32.763 "zone_append": false, 00:17:32.763 "compare": false, 00:17:32.763 "compare_and_write": false, 00:17:32.763 "abort": true, 00:17:32.763 "seek_hole": false, 00:17:32.763 "seek_data": false, 00:17:32.763 "copy": true, 00:17:32.763 "nvme_iov_md": false 00:17:32.763 }, 00:17:32.763 "memory_domains": [ 00:17:32.763 { 00:17:32.763 "dma_device_id": "system", 00:17:32.763 "dma_device_type": 1 00:17:32.763 }, 00:17:32.763 { 00:17:32.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.763 "dma_device_type": 2 00:17:32.763 } 00:17:32.763 ], 00:17:32.763 "driver_specific": {} 00:17:32.763 } 00:17:32.763 ] 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.763 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.021 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.021 "name": "Existed_Raid", 00:17:33.021 "uuid": "e97547f3-0eee-4dc9-8eff-186a8706f4ac", 00:17:33.021 "strip_size_kb": 0, 00:17:33.021 "state": "configuring", 00:17:33.021 "raid_level": "raid1", 00:17:33.021 "superblock": true, 00:17:33.021 "num_base_bdevs": 4, 00:17:33.021 "num_base_bdevs_discovered": 1, 00:17:33.021 "num_base_bdevs_operational": 4, 00:17:33.021 "base_bdevs_list": [ 00:17:33.021 { 00:17:33.021 "name": "BaseBdev1", 00:17:33.021 "uuid": "230670c9-f6cc-4724-ab28-ade79b06a914", 00:17:33.021 "is_configured": true, 00:17:33.021 "data_offset": 2048, 00:17:33.021 "data_size": 63488 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": "BaseBdev2", 00:17:33.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.021 "is_configured": false, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 0 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": "BaseBdev3", 00:17:33.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.021 "is_configured": false, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 0 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": 
"BaseBdev4", 00:17:33.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.021 "is_configured": false, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 0 00:17:33.021 } 00:17:33.021 ] 00:17:33.021 }' 00:17:33.021 10:24:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.021 10:24:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.279 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:33.537 [2024-07-15 10:24:58.206627] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.537 [2024-07-15 10:24:58.206655] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb67d0 name Existed_Raid, state configuring 00:17:33.537 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:33.796 [2024-07-15 10:24:58.387128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.796 [2024-07-15 10:24:58.388215] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.796 [2024-07-15 10:24:58.388242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.796 [2024-07-15 10:24:58.388252] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:33.796 [2024-07-15 10:24:58.388262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.796 [2024-07-15 10:24:58.388269] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:33.796 [2024-07-15 10:24:58.388279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.796 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.055 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.055 "name": "Existed_Raid", 00:17:34.055 "uuid": "f6efbd49-5296-49fa-b03b-46c61c160096", 00:17:34.055 "strip_size_kb": 0, 00:17:34.055 "state": "configuring", 00:17:34.055 "raid_level": "raid1", 00:17:34.055 "superblock": true, 00:17:34.055 "num_base_bdevs": 4, 00:17:34.055 "num_base_bdevs_discovered": 1, 00:17:34.055 "num_base_bdevs_operational": 4, 00:17:34.055 "base_bdevs_list": [ 00:17:34.055 { 00:17:34.055 "name": "BaseBdev1", 00:17:34.055 "uuid": "230670c9-f6cc-4724-ab28-ade79b06a914", 00:17:34.055 "is_configured": true, 00:17:34.055 "data_offset": 2048, 00:17:34.055 "data_size": 63488 00:17:34.055 }, 00:17:34.055 { 00:17:34.055 "name": "BaseBdev2", 00:17:34.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.055 "is_configured": false, 00:17:34.055 "data_offset": 0, 00:17:34.055 "data_size": 0 00:17:34.055 }, 00:17:34.055 { 00:17:34.055 "name": "BaseBdev3", 00:17:34.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.055 "is_configured": false, 00:17:34.055 "data_offset": 0, 00:17:34.055 "data_size": 0 00:17:34.055 }, 00:17:34.055 { 00:17:34.055 "name": "BaseBdev4", 00:17:34.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.055 "is_configured": false, 00:17:34.055 "data_offset": 0, 00:17:34.055 "data_size": 0 00:17:34.055 } 00:17:34.055 ] 00:17:34.055 }' 00:17:34.055 10:24:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.055 10:24:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.314 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:34.572 [2024-07-15 10:24:59.219984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.572 BaseBdev2 00:17:34.572 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:34.572 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:34.572 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:34.572 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:34.572 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:34.572 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:34.572 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:34.830 [ 00:17:34.830 { 00:17:34.830 "name": "BaseBdev2", 00:17:34.830 "aliases": [ 00:17:34.830 "d207b54b-28b7-47ec-b0c7-86d113cc7a24" 
00:17:34.830 ], 00:17:34.830 "product_name": "Malloc disk", 00:17:34.830 "block_size": 512, 00:17:34.830 "num_blocks": 65536, 00:17:34.830 "uuid": "d207b54b-28b7-47ec-b0c7-86d113cc7a24", 00:17:34.830 "assigned_rate_limits": { 00:17:34.830 "rw_ios_per_sec": 0, 00:17:34.830 "rw_mbytes_per_sec": 0, 00:17:34.830 "r_mbytes_per_sec": 0, 00:17:34.830 "w_mbytes_per_sec": 0 00:17:34.830 }, 00:17:34.830 "claimed": true, 00:17:34.830 "claim_type": "exclusive_write", 00:17:34.830 "zoned": false, 00:17:34.830 "supported_io_types": { 00:17:34.830 "read": true, 00:17:34.830 "write": true, 00:17:34.830 "unmap": true, 00:17:34.830 "flush": true, 00:17:34.830 "reset": true, 00:17:34.830 "nvme_admin": false, 00:17:34.830 "nvme_io": false, 00:17:34.830 "nvme_io_md": false, 00:17:34.830 "write_zeroes": true, 00:17:34.830 "zcopy": true, 00:17:34.830 "get_zone_info": false, 00:17:34.830 "zone_management": false, 00:17:34.830 "zone_append": false, 00:17:34.830 "compare": false, 00:17:34.830 "compare_and_write": false, 00:17:34.830 "abort": true, 00:17:34.830 "seek_hole": false, 00:17:34.830 "seek_data": false, 00:17:34.830 "copy": true, 00:17:34.830 "nvme_iov_md": false 00:17:34.830 }, 00:17:34.830 "memory_domains": [ 00:17:34.830 { 00:17:34.830 "dma_device_id": "system", 00:17:34.830 "dma_device_type": 1 00:17:34.830 }, 00:17:34.830 { 00:17:34.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.830 "dma_device_type": 2 00:17:34.830 } 00:17:34.830 ], 00:17:34.830 "driver_specific": {} 00:17:34.830 } 00:17:34.830 ] 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.830 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.089 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.089 "name": "Existed_Raid", 00:17:35.089 "uuid": 
"f6efbd49-5296-49fa-b03b-46c61c160096", 00:17:35.089 "strip_size_kb": 0, 00:17:35.089 "state": "configuring", 00:17:35.089 "raid_level": "raid1", 00:17:35.089 "superblock": true, 00:17:35.089 "num_base_bdevs": 4, 00:17:35.089 "num_base_bdevs_discovered": 2, 00:17:35.089 "num_base_bdevs_operational": 4, 00:17:35.089 "base_bdevs_list": [ 00:17:35.089 { 00:17:35.089 "name": "BaseBdev1", 00:17:35.089 "uuid": "230670c9-f6cc-4724-ab28-ade79b06a914", 00:17:35.089 "is_configured": true, 00:17:35.089 "data_offset": 2048, 00:17:35.089 "data_size": 63488 00:17:35.089 }, 00:17:35.089 { 00:17:35.089 "name": "BaseBdev2", 00:17:35.089 "uuid": "d207b54b-28b7-47ec-b0c7-86d113cc7a24", 00:17:35.089 "is_configured": true, 00:17:35.089 "data_offset": 2048, 00:17:35.089 "data_size": 63488 00:17:35.089 }, 00:17:35.089 { 00:17:35.089 "name": "BaseBdev3", 00:17:35.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.089 "is_configured": false, 00:17:35.089 "data_offset": 0, 00:17:35.089 "data_size": 0 00:17:35.089 }, 00:17:35.089 { 00:17:35.089 "name": "BaseBdev4", 00:17:35.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.089 "is_configured": false, 00:17:35.089 "data_offset": 0, 00:17:35.089 "data_size": 0 00:17:35.089 } 00:17:35.089 ] 00:17:35.089 }' 00:17:35.089 10:24:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.089 10:24:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.656 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:35.656 [2024-07-15 10:25:00.409858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:35.656 BaseBdev3 00:17:35.656 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:35.656 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:35.656 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:35.656 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:35.656 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:35.656 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:35.656 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:35.914 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:36.172 [ 00:17:36.172 { 00:17:36.172 "name": "BaseBdev3", 00:17:36.172 "aliases": [ 00:17:36.172 "0f5287cd-042b-49e5-9849-1d83f0447e00" 00:17:36.172 ], 00:17:36.172 "product_name": "Malloc disk", 00:17:36.172 "block_size": 512, 00:17:36.172 "num_blocks": 65536, 00:17:36.172 "uuid": "0f5287cd-042b-49e5-9849-1d83f0447e00", 00:17:36.172 "assigned_rate_limits": { 00:17:36.172 "rw_ios_per_sec": 0, 00:17:36.172 "rw_mbytes_per_sec": 0, 00:17:36.172 "r_mbytes_per_sec": 0, 00:17:36.172 "w_mbytes_per_sec": 0 00:17:36.172 }, 00:17:36.172 "claimed": true, 00:17:36.172 "claim_type": 
"exclusive_write", 00:17:36.172 "zoned": false, 00:17:36.172 "supported_io_types": { 00:17:36.172 "read": true, 00:17:36.172 "write": true, 00:17:36.172 "unmap": true, 00:17:36.172 "flush": true, 00:17:36.172 "reset": true, 00:17:36.172 "nvme_admin": false, 00:17:36.172 "nvme_io": false, 00:17:36.172 "nvme_io_md": false, 00:17:36.172 "write_zeroes": true, 00:17:36.172 "zcopy": true, 00:17:36.172 "get_zone_info": false, 00:17:36.172 "zone_management": false, 00:17:36.172 "zone_append": false, 00:17:36.172 "compare": false, 00:17:36.172 "compare_and_write": false, 00:17:36.172 "abort": true, 00:17:36.172 "seek_hole": false, 00:17:36.172 "seek_data": false, 00:17:36.172 "copy": true, 00:17:36.172 "nvme_iov_md": false 00:17:36.172 }, 00:17:36.172 "memory_domains": [ 00:17:36.172 { 00:17:36.172 "dma_device_id": "system", 00:17:36.172 "dma_device_type": 1 00:17:36.172 }, 00:17:36.172 { 00:17:36.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.172 "dma_device_type": 2 00:17:36.172 } 00:17:36.172 ], 00:17:36.172 "driver_specific": {} 00:17:36.172 } 00:17:36.172 ] 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.172 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.172 "name": "Existed_Raid", 00:17:36.172 "uuid": "f6efbd49-5296-49fa-b03b-46c61c160096", 00:17:36.172 "strip_size_kb": 0, 00:17:36.172 "state": "configuring", 00:17:36.172 "raid_level": "raid1", 00:17:36.172 "superblock": true, 00:17:36.172 "num_base_bdevs": 4, 00:17:36.172 "num_base_bdevs_discovered": 3, 00:17:36.172 "num_base_bdevs_operational": 4, 00:17:36.172 "base_bdevs_list": [ 00:17:36.172 { 00:17:36.172 "name": "BaseBdev1", 00:17:36.172 "uuid": "230670c9-f6cc-4724-ab28-ade79b06a914", 00:17:36.172 
"is_configured": true, 00:17:36.172 "data_offset": 2048, 00:17:36.173 "data_size": 63488 00:17:36.173 }, 00:17:36.173 { 00:17:36.173 "name": "BaseBdev2", 00:17:36.173 "uuid": "d207b54b-28b7-47ec-b0c7-86d113cc7a24", 00:17:36.173 "is_configured": true, 00:17:36.173 "data_offset": 2048, 00:17:36.173 "data_size": 63488 00:17:36.173 }, 00:17:36.173 { 00:17:36.173 "name": "BaseBdev3", 00:17:36.173 "uuid": "0f5287cd-042b-49e5-9849-1d83f0447e00", 00:17:36.173 "is_configured": true, 00:17:36.173 "data_offset": 2048, 00:17:36.173 "data_size": 63488 00:17:36.173 }, 00:17:36.173 { 00:17:36.173 "name": "BaseBdev4", 00:17:36.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.173 "is_configured": false, 00:17:36.173 "data_offset": 0, 00:17:36.173 "data_size": 0 00:17:36.173 } 00:17:36.173 ] 00:17:36.173 }' 00:17:36.173 10:25:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.173 10:25:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.738 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:36.997 [2024-07-15 10:25:01.591699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:36.997 [2024-07-15 10:25:01.591823] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfb7830 00:17:36.997 [2024-07-15 10:25:01.591833] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:36.997 [2024-07-15 10:25:01.591964] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfae360 00:17:36.997 [2024-07-15 10:25:01.592057] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfb7830 00:17:36.997 [2024-07-15 10:25:01.592064] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xfb7830 00:17:36.997 [2024-07-15 10:25:01.592129] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.997 BaseBdev4 00:17:36.997 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:36.997 10:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:36.997 10:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:36.997 10:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:36.997 10:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:36.997 10:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:36.997 10:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.997 10:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:37.257 [ 00:17:37.257 { 00:17:37.257 "name": "BaseBdev4", 00:17:37.257 "aliases": [ 00:17:37.257 "345e4ad3-8ff2-417d-99b7-d0835e94fa6a" 00:17:37.257 ], 00:17:37.257 "product_name": "Malloc disk", 00:17:37.257 "block_size": 512, 00:17:37.257 "num_blocks": 65536, 00:17:37.257 "uuid": 
"345e4ad3-8ff2-417d-99b7-d0835e94fa6a", 00:17:37.257 "assigned_rate_limits": { 00:17:37.257 "rw_ios_per_sec": 0, 00:17:37.257 "rw_mbytes_per_sec": 0, 00:17:37.257 "r_mbytes_per_sec": 0, 00:17:37.257 "w_mbytes_per_sec": 0 00:17:37.257 }, 00:17:37.257 "claimed": true, 00:17:37.257 "claim_type": "exclusive_write", 00:17:37.257 "zoned": false, 00:17:37.257 "supported_io_types": { 00:17:37.257 "read": true, 00:17:37.257 "write": true, 00:17:37.257 "unmap": true, 00:17:37.257 "flush": true, 00:17:37.257 "reset": true, 00:17:37.257 "nvme_admin": false, 00:17:37.257 "nvme_io": false, 00:17:37.257 "nvme_io_md": false, 00:17:37.257 "write_zeroes": true, 00:17:37.257 "zcopy": true, 00:17:37.257 "get_zone_info": false, 00:17:37.257 "zone_management": false, 00:17:37.257 "zone_append": false, 00:17:37.257 "compare": false, 00:17:37.257 "compare_and_write": false, 00:17:37.257 "abort": true, 00:17:37.257 "seek_hole": false, 00:17:37.257 "seek_data": false, 00:17:37.257 "copy": true, 00:17:37.257 "nvme_iov_md": false 00:17:37.257 }, 00:17:37.257 "memory_domains": [ 00:17:37.257 { 00:17:37.257 "dma_device_id": "system", 00:17:37.257 "dma_device_type": 1 00:17:37.257 }, 00:17:37.257 { 00:17:37.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.257 "dma_device_type": 2 00:17:37.257 } 00:17:37.257 ], 00:17:37.257 "driver_specific": {} 00:17:37.257 } 00:17:37.257 ] 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.257 10:25:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.515 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.515 "name": "Existed_Raid", 00:17:37.515 "uuid": "f6efbd49-5296-49fa-b03b-46c61c160096", 00:17:37.515 "strip_size_kb": 0, 00:17:37.515 "state": "online", 00:17:37.515 "raid_level": "raid1", 00:17:37.515 "superblock": true, 
00:17:37.515 "num_base_bdevs": 4, 00:17:37.515 "num_base_bdevs_discovered": 4, 00:17:37.515 "num_base_bdevs_operational": 4, 00:17:37.515 "base_bdevs_list": [ 00:17:37.515 { 00:17:37.515 "name": "BaseBdev1", 00:17:37.515 "uuid": "230670c9-f6cc-4724-ab28-ade79b06a914", 00:17:37.515 "is_configured": true, 00:17:37.515 "data_offset": 2048, 00:17:37.515 "data_size": 63488 00:17:37.515 }, 00:17:37.515 { 00:17:37.515 "name": "BaseBdev2", 00:17:37.515 "uuid": "d207b54b-28b7-47ec-b0c7-86d113cc7a24", 00:17:37.515 "is_configured": true, 00:17:37.515 "data_offset": 2048, 00:17:37.515 "data_size": 63488 00:17:37.515 }, 00:17:37.515 { 00:17:37.515 "name": "BaseBdev3", 00:17:37.515 "uuid": "0f5287cd-042b-49e5-9849-1d83f0447e00", 00:17:37.515 "is_configured": true, 00:17:37.515 "data_offset": 2048, 00:17:37.515 "data_size": 63488 00:17:37.515 }, 00:17:37.515 { 00:17:37.515 "name": "BaseBdev4", 00:17:37.515 "uuid": "345e4ad3-8ff2-417d-99b7-d0835e94fa6a", 00:17:37.515 "is_configured": true, 00:17:37.515 "data_offset": 2048, 00:17:37.515 "data_size": 63488 00:17:37.515 } 00:17:37.515 ] 00:17:37.515 }' 00:17:37.515 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.515 10:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:38.083 [2024-07-15 10:25:02.766929] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.083 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:38.083 "name": "Existed_Raid", 00:17:38.083 "aliases": [ 00:17:38.083 "f6efbd49-5296-49fa-b03b-46c61c160096" 00:17:38.083 ], 00:17:38.083 "product_name": "Raid Volume", 00:17:38.083 "block_size": 512, 00:17:38.083 "num_blocks": 63488, 00:17:38.083 "uuid": "f6efbd49-5296-49fa-b03b-46c61c160096", 00:17:38.083 "assigned_rate_limits": { 00:17:38.083 "rw_ios_per_sec": 0, 00:17:38.083 "rw_mbytes_per_sec": 0, 00:17:38.083 "r_mbytes_per_sec": 0, 00:17:38.083 "w_mbytes_per_sec": 0 00:17:38.083 }, 00:17:38.083 "claimed": false, 00:17:38.083 "zoned": false, 00:17:38.083 "supported_io_types": { 00:17:38.083 "read": true, 00:17:38.083 "write": true, 00:17:38.083 "unmap": false, 00:17:38.083 "flush": false, 00:17:38.083 "reset": true, 00:17:38.083 "nvme_admin": false, 00:17:38.083 "nvme_io": false, 00:17:38.083 "nvme_io_md": false, 00:17:38.083 "write_zeroes": true, 00:17:38.083 "zcopy": false, 00:17:38.083 "get_zone_info": false, 00:17:38.083 "zone_management": false, 00:17:38.083 "zone_append": false, 
00:17:38.083 "compare": false, 00:17:38.083 "compare_and_write": false, 00:17:38.083 "abort": false, 00:17:38.083 "seek_hole": false, 00:17:38.083 "seek_data": false, 00:17:38.083 "copy": false, 00:17:38.083 "nvme_iov_md": false 00:17:38.083 }, 00:17:38.083 "memory_domains": [ 00:17:38.083 { 00:17:38.083 "dma_device_id": "system", 00:17:38.083 "dma_device_type": 1 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.083 "dma_device_type": 2 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "dma_device_id": "system", 00:17:38.083 "dma_device_type": 1 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.083 "dma_device_type": 2 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "dma_device_id": "system", 00:17:38.083 "dma_device_type": 1 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.083 "dma_device_type": 2 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "dma_device_id": "system", 00:17:38.083 "dma_device_type": 1 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.083 "dma_device_type": 2 00:17:38.083 } 00:17:38.083 ], 00:17:38.083 "driver_specific": { 00:17:38.083 "raid": { 00:17:38.083 "uuid": "f6efbd49-5296-49fa-b03b-46c61c160096", 00:17:38.083 "strip_size_kb": 0, 00:17:38.083 "state": "online", 00:17:38.083 "raid_level": "raid1", 00:17:38.083 "superblock": true, 00:17:38.083 "num_base_bdevs": 4, 00:17:38.083 "num_base_bdevs_discovered": 4, 00:17:38.083 "num_base_bdevs_operational": 4, 00:17:38.083 "base_bdevs_list": [ 00:17:38.083 { 00:17:38.083 "name": "BaseBdev1", 00:17:38.083 "uuid": "230670c9-f6cc-4724-ab28-ade79b06a914", 00:17:38.083 "is_configured": true, 00:17:38.083 "data_offset": 2048, 00:17:38.083 "data_size": 63488 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "name": "BaseBdev2", 00:17:38.083 "uuid": "d207b54b-28b7-47ec-b0c7-86d113cc7a24", 00:17:38.083 "is_configured": true, 00:17:38.083 "data_offset": 2048, 00:17:38.083 "data_size": 63488 00:17:38.083 }, 00:17:38.083 { 00:17:38.083 "name": "BaseBdev3", 00:17:38.084 "uuid": "0f5287cd-042b-49e5-9849-1d83f0447e00", 00:17:38.084 "is_configured": true, 00:17:38.084 "data_offset": 2048, 00:17:38.084 "data_size": 63488 00:17:38.084 }, 00:17:38.084 { 00:17:38.084 "name": "BaseBdev4", 00:17:38.084 "uuid": "345e4ad3-8ff2-417d-99b7-d0835e94fa6a", 00:17:38.084 "is_configured": true, 00:17:38.084 "data_offset": 2048, 00:17:38.084 "data_size": 63488 00:17:38.084 } 00:17:38.084 ] 00:17:38.084 } 00:17:38.084 } 00:17:38.084 }' 00:17:38.084 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:38.084 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:38.084 BaseBdev2 00:17:38.084 BaseBdev3 00:17:38.084 BaseBdev4' 00:17:38.084 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:38.084 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:38.084 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:38.342 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:38.342 "name": "BaseBdev1", 00:17:38.342 "aliases": [ 00:17:38.342 
"230670c9-f6cc-4724-ab28-ade79b06a914" 00:17:38.342 ], 00:17:38.342 "product_name": "Malloc disk", 00:17:38.342 "block_size": 512, 00:17:38.342 "num_blocks": 65536, 00:17:38.342 "uuid": "230670c9-f6cc-4724-ab28-ade79b06a914", 00:17:38.342 "assigned_rate_limits": { 00:17:38.342 "rw_ios_per_sec": 0, 00:17:38.342 "rw_mbytes_per_sec": 0, 00:17:38.342 "r_mbytes_per_sec": 0, 00:17:38.342 "w_mbytes_per_sec": 0 00:17:38.342 }, 00:17:38.342 "claimed": true, 00:17:38.342 "claim_type": "exclusive_write", 00:17:38.342 "zoned": false, 00:17:38.342 "supported_io_types": { 00:17:38.342 "read": true, 00:17:38.342 "write": true, 00:17:38.342 "unmap": true, 00:17:38.342 "flush": true, 00:17:38.342 "reset": true, 00:17:38.342 "nvme_admin": false, 00:17:38.342 "nvme_io": false, 00:17:38.343 "nvme_io_md": false, 00:17:38.343 "write_zeroes": true, 00:17:38.343 "zcopy": true, 00:17:38.343 "get_zone_info": false, 00:17:38.343 "zone_management": false, 00:17:38.343 "zone_append": false, 00:17:38.343 "compare": false, 00:17:38.343 "compare_and_write": false, 00:17:38.343 "abort": true, 00:17:38.343 "seek_hole": false, 00:17:38.343 "seek_data": false, 00:17:38.343 "copy": true, 00:17:38.343 "nvme_iov_md": false 00:17:38.343 }, 00:17:38.343 "memory_domains": [ 00:17:38.343 { 00:17:38.343 "dma_device_id": "system", 00:17:38.343 "dma_device_type": 1 00:17:38.343 }, 00:17:38.343 { 00:17:38.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.343 "dma_device_type": 2 00:17:38.343 } 00:17:38.343 ], 00:17:38.343 "driver_specific": {} 00:17:38.343 }' 00:17:38.343 10:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.343 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.343 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:38.343 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.343 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:38.601 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:38.860 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:38.860 "name": "BaseBdev2", 00:17:38.860 "aliases": [ 00:17:38.860 "d207b54b-28b7-47ec-b0c7-86d113cc7a24" 00:17:38.860 ], 00:17:38.860 "product_name": "Malloc disk", 00:17:38.860 "block_size": 512, 
00:17:38.860 "num_blocks": 65536, 00:17:38.860 "uuid": "d207b54b-28b7-47ec-b0c7-86d113cc7a24", 00:17:38.860 "assigned_rate_limits": { 00:17:38.860 "rw_ios_per_sec": 0, 00:17:38.860 "rw_mbytes_per_sec": 0, 00:17:38.860 "r_mbytes_per_sec": 0, 00:17:38.860 "w_mbytes_per_sec": 0 00:17:38.860 }, 00:17:38.860 "claimed": true, 00:17:38.860 "claim_type": "exclusive_write", 00:17:38.860 "zoned": false, 00:17:38.860 "supported_io_types": { 00:17:38.860 "read": true, 00:17:38.860 "write": true, 00:17:38.860 "unmap": true, 00:17:38.860 "flush": true, 00:17:38.860 "reset": true, 00:17:38.860 "nvme_admin": false, 00:17:38.860 "nvme_io": false, 00:17:38.860 "nvme_io_md": false, 00:17:38.860 "write_zeroes": true, 00:17:38.860 "zcopy": true, 00:17:38.860 "get_zone_info": false, 00:17:38.860 "zone_management": false, 00:17:38.860 "zone_append": false, 00:17:38.860 "compare": false, 00:17:38.860 "compare_and_write": false, 00:17:38.860 "abort": true, 00:17:38.860 "seek_hole": false, 00:17:38.860 "seek_data": false, 00:17:38.860 "copy": true, 00:17:38.860 "nvme_iov_md": false 00:17:38.860 }, 00:17:38.860 "memory_domains": [ 00:17:38.860 { 00:17:38.860 "dma_device_id": "system", 00:17:38.860 "dma_device_type": 1 00:17:38.860 }, 00:17:38.860 { 00:17:38.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.860 "dma_device_type": 2 00:17:38.860 } 00:17:38.860 ], 00:17:38.860 "driver_specific": {} 00:17:38.860 }' 00:17:38.860 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.860 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.860 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:38.860 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.860 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.860 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:38.860 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.118 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.118 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.118 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.118 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.118 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.118 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:39.118 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:39.118 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:39.376 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:39.376 "name": "BaseBdev3", 00:17:39.376 "aliases": [ 00:17:39.376 "0f5287cd-042b-49e5-9849-1d83f0447e00" 00:17:39.376 ], 00:17:39.376 "product_name": "Malloc disk", 00:17:39.376 "block_size": 512, 00:17:39.376 "num_blocks": 65536, 00:17:39.376 "uuid": "0f5287cd-042b-49e5-9849-1d83f0447e00", 00:17:39.376 "assigned_rate_limits": { 
00:17:39.376 "rw_ios_per_sec": 0, 00:17:39.376 "rw_mbytes_per_sec": 0, 00:17:39.376 "r_mbytes_per_sec": 0, 00:17:39.376 "w_mbytes_per_sec": 0 00:17:39.376 }, 00:17:39.376 "claimed": true, 00:17:39.376 "claim_type": "exclusive_write", 00:17:39.376 "zoned": false, 00:17:39.376 "supported_io_types": { 00:17:39.376 "read": true, 00:17:39.376 "write": true, 00:17:39.376 "unmap": true, 00:17:39.376 "flush": true, 00:17:39.376 "reset": true, 00:17:39.376 "nvme_admin": false, 00:17:39.376 "nvme_io": false, 00:17:39.376 "nvme_io_md": false, 00:17:39.376 "write_zeroes": true, 00:17:39.376 "zcopy": true, 00:17:39.376 "get_zone_info": false, 00:17:39.376 "zone_management": false, 00:17:39.376 "zone_append": false, 00:17:39.376 "compare": false, 00:17:39.376 "compare_and_write": false, 00:17:39.376 "abort": true, 00:17:39.376 "seek_hole": false, 00:17:39.376 "seek_data": false, 00:17:39.376 "copy": true, 00:17:39.376 "nvme_iov_md": false 00:17:39.376 }, 00:17:39.376 "memory_domains": [ 00:17:39.376 { 00:17:39.376 "dma_device_id": "system", 00:17:39.376 "dma_device_type": 1 00:17:39.376 }, 00:17:39.376 { 00:17:39.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.376 "dma_device_type": 2 00:17:39.376 } 00:17:39.376 ], 00:17:39.376 "driver_specific": {} 00:17:39.376 }' 00:17:39.376 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.376 10:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.376 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:39.376 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.376 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.376 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:39.376 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.376 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.634 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.634 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.634 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.634 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.634 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:39.634 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:39.634 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:39.892 "name": "BaseBdev4", 00:17:39.892 "aliases": [ 00:17:39.892 "345e4ad3-8ff2-417d-99b7-d0835e94fa6a" 00:17:39.892 ], 00:17:39.892 "product_name": "Malloc disk", 00:17:39.892 "block_size": 512, 00:17:39.892 "num_blocks": 65536, 00:17:39.892 "uuid": "345e4ad3-8ff2-417d-99b7-d0835e94fa6a", 00:17:39.892 "assigned_rate_limits": { 00:17:39.892 "rw_ios_per_sec": 0, 00:17:39.892 "rw_mbytes_per_sec": 0, 00:17:39.892 "r_mbytes_per_sec": 0, 00:17:39.892 
"w_mbytes_per_sec": 0 00:17:39.892 }, 00:17:39.892 "claimed": true, 00:17:39.892 "claim_type": "exclusive_write", 00:17:39.892 "zoned": false, 00:17:39.892 "supported_io_types": { 00:17:39.892 "read": true, 00:17:39.892 "write": true, 00:17:39.892 "unmap": true, 00:17:39.892 "flush": true, 00:17:39.892 "reset": true, 00:17:39.892 "nvme_admin": false, 00:17:39.892 "nvme_io": false, 00:17:39.892 "nvme_io_md": false, 00:17:39.892 "write_zeroes": true, 00:17:39.892 "zcopy": true, 00:17:39.892 "get_zone_info": false, 00:17:39.892 "zone_management": false, 00:17:39.892 "zone_append": false, 00:17:39.892 "compare": false, 00:17:39.892 "compare_and_write": false, 00:17:39.892 "abort": true, 00:17:39.892 "seek_hole": false, 00:17:39.892 "seek_data": false, 00:17:39.892 "copy": true, 00:17:39.892 "nvme_iov_md": false 00:17:39.892 }, 00:17:39.892 "memory_domains": [ 00:17:39.892 { 00:17:39.892 "dma_device_id": "system", 00:17:39.892 "dma_device_type": 1 00:17:39.892 }, 00:17:39.892 { 00:17:39.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.892 "dma_device_type": 2 00:17:39.892 } 00:17:39.892 ], 00:17:39.892 "driver_specific": {} 00:17:39.892 }' 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.892 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.149 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.149 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:40.149 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:40.149 [2024-07-15 10:25:04.884221] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.149 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:40.149 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:40.149 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.150 10:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.408 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.408 "name": "Existed_Raid", 00:17:40.408 "uuid": "f6efbd49-5296-49fa-b03b-46c61c160096", 00:17:40.408 "strip_size_kb": 0, 00:17:40.408 "state": "online", 00:17:40.408 "raid_level": "raid1", 00:17:40.408 "superblock": true, 00:17:40.408 "num_base_bdevs": 4, 00:17:40.408 "num_base_bdevs_discovered": 3, 00:17:40.408 "num_base_bdevs_operational": 3, 00:17:40.408 "base_bdevs_list": [ 00:17:40.408 { 00:17:40.408 "name": null, 00:17:40.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.408 "is_configured": false, 00:17:40.408 "data_offset": 2048, 00:17:40.408 "data_size": 63488 00:17:40.408 }, 00:17:40.408 { 00:17:40.408 "name": "BaseBdev2", 00:17:40.408 "uuid": "d207b54b-28b7-47ec-b0c7-86d113cc7a24", 00:17:40.408 "is_configured": true, 00:17:40.408 "data_offset": 2048, 00:17:40.408 "data_size": 63488 00:17:40.408 }, 00:17:40.408 { 00:17:40.408 "name": "BaseBdev3", 00:17:40.408 "uuid": "0f5287cd-042b-49e5-9849-1d83f0447e00", 00:17:40.408 "is_configured": true, 00:17:40.408 "data_offset": 2048, 00:17:40.408 "data_size": 63488 00:17:40.408 }, 00:17:40.408 { 00:17:40.408 "name": "BaseBdev4", 00:17:40.408 "uuid": "345e4ad3-8ff2-417d-99b7-d0835e94fa6a", 00:17:40.408 "is_configured": true, 00:17:40.408 "data_offset": 2048, 00:17:40.408 "data_size": 63488 00:17:40.408 } 00:17:40.408 ] 00:17:40.408 }' 00:17:40.408 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.408 10:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.975 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:40.975 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:40.975 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.975 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:40.975 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:40.975 
10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:40.975 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:41.234 [2024-07-15 10:25:05.879690] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:41.234 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:41.234 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:41.234 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.234 10:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:41.492 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:41.492 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.492 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:41.492 [2024-07-15 10:25:06.217962] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:41.492 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:41.492 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:41.492 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.492 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:41.749 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:41.749 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.749 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:42.005 [2024-07-15 10:25:06.552302] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:42.005 [2024-07-15 10:25:06.552358] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.005 [2024-07-15 10:25:06.562170] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.005 [2024-07-15 10:25:06.562214] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.005 [2024-07-15 10:25:06.562221] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb7830 name Existed_Raid, state offline 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:42.005 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:42.262 BaseBdev2 00:17:42.262 10:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:42.262 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:42.262 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.262 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:42.262 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.262 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.262 10:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.519 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:42.519 [ 00:17:42.519 { 00:17:42.519 "name": "BaseBdev2", 00:17:42.519 "aliases": [ 00:17:42.519 "9838071d-a5ef-4baf-b3aa-7c963137b25f" 00:17:42.519 ], 00:17:42.519 "product_name": "Malloc disk", 00:17:42.519 "block_size": 512, 00:17:42.519 "num_blocks": 65536, 00:17:42.519 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:42.519 "assigned_rate_limits": { 00:17:42.519 "rw_ios_per_sec": 0, 00:17:42.519 "rw_mbytes_per_sec": 0, 00:17:42.519 "r_mbytes_per_sec": 0, 00:17:42.519 "w_mbytes_per_sec": 0 00:17:42.519 }, 00:17:42.519 "claimed": false, 00:17:42.519 "zoned": false, 00:17:42.519 "supported_io_types": { 00:17:42.519 "read": true, 00:17:42.519 "write": true, 00:17:42.519 "unmap": true, 00:17:42.519 "flush": true, 00:17:42.519 "reset": true, 00:17:42.519 "nvme_admin": false, 00:17:42.519 "nvme_io": false, 00:17:42.519 "nvme_io_md": false, 00:17:42.519 "write_zeroes": true, 00:17:42.519 "zcopy": true, 00:17:42.519 "get_zone_info": false, 00:17:42.519 "zone_management": false, 00:17:42.519 "zone_append": false, 00:17:42.519 "compare": false, 00:17:42.519 "compare_and_write": false, 00:17:42.519 "abort": true, 00:17:42.519 "seek_hole": false, 00:17:42.519 "seek_data": false, 00:17:42.519 "copy": true, 00:17:42.519 "nvme_iov_md": false 00:17:42.519 }, 00:17:42.519 "memory_domains": [ 00:17:42.519 { 00:17:42.519 "dma_device_id": "system", 00:17:42.519 "dma_device_type": 1 00:17:42.519 }, 00:17:42.519 { 00:17:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:42.519 "dma_device_type": 2 00:17:42.519 } 00:17:42.519 ], 00:17:42.519 "driver_specific": {} 00:17:42.519 } 00:17:42.519 ] 00:17:42.519 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:42.519 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:42.519 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:42.519 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:42.776 BaseBdev3 00:17:42.776 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:42.776 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:42.776 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:42.776 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:42.776 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:42.776 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:42.776 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.037 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:43.037 [ 00:17:43.037 { 00:17:43.037 "name": "BaseBdev3", 00:17:43.037 "aliases": [ 00:17:43.037 "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe" 00:17:43.037 ], 00:17:43.037 "product_name": "Malloc disk", 00:17:43.037 "block_size": 512, 00:17:43.037 "num_blocks": 65536, 00:17:43.037 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:43.037 "assigned_rate_limits": { 00:17:43.037 "rw_ios_per_sec": 0, 00:17:43.037 "rw_mbytes_per_sec": 0, 00:17:43.037 "r_mbytes_per_sec": 0, 00:17:43.037 "w_mbytes_per_sec": 0 00:17:43.037 }, 00:17:43.037 "claimed": false, 00:17:43.037 "zoned": false, 00:17:43.037 "supported_io_types": { 00:17:43.037 "read": true, 00:17:43.037 "write": true, 00:17:43.037 "unmap": true, 00:17:43.037 "flush": true, 00:17:43.037 "reset": true, 00:17:43.037 "nvme_admin": false, 00:17:43.037 "nvme_io": false, 00:17:43.037 "nvme_io_md": false, 00:17:43.037 "write_zeroes": true, 00:17:43.037 "zcopy": true, 00:17:43.037 "get_zone_info": false, 00:17:43.037 "zone_management": false, 00:17:43.037 "zone_append": false, 00:17:43.037 "compare": false, 00:17:43.037 "compare_and_write": false, 00:17:43.037 "abort": true, 00:17:43.037 "seek_hole": false, 00:17:43.037 "seek_data": false, 00:17:43.037 "copy": true, 00:17:43.037 "nvme_iov_md": false 00:17:43.037 }, 00:17:43.037 "memory_domains": [ 00:17:43.037 { 00:17:43.037 "dma_device_id": "system", 00:17:43.037 "dma_device_type": 1 00:17:43.037 }, 00:17:43.037 { 00:17:43.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.037 "dma_device_type": 2 00:17:43.037 } 00:17:43.037 ], 00:17:43.037 "driver_specific": {} 00:17:43.037 } 00:17:43.037 ] 00:17:43.037 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:43.037 
10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:43.037 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:43.037 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:43.295 BaseBdev4 00:17:43.295 10:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:17:43.295 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:43.295 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:43.295 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:43.295 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:43.295 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:43.295 10:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.295 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:43.554 [ 00:17:43.554 { 00:17:43.554 "name": "BaseBdev4", 00:17:43.554 "aliases": [ 00:17:43.554 "f5ddea98-f21e-4537-8524-3a161b00aee6" 00:17:43.554 ], 00:17:43.554 "product_name": "Malloc disk", 00:17:43.554 "block_size": 512, 00:17:43.554 "num_blocks": 65536, 00:17:43.554 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:43.554 "assigned_rate_limits": { 00:17:43.554 "rw_ios_per_sec": 0, 00:17:43.554 "rw_mbytes_per_sec": 0, 00:17:43.554 "r_mbytes_per_sec": 0, 00:17:43.554 "w_mbytes_per_sec": 0 00:17:43.554 }, 00:17:43.554 "claimed": false, 00:17:43.554 "zoned": false, 00:17:43.554 "supported_io_types": { 00:17:43.554 "read": true, 00:17:43.554 "write": true, 00:17:43.554 "unmap": true, 00:17:43.554 "flush": true, 00:17:43.554 "reset": true, 00:17:43.554 "nvme_admin": false, 00:17:43.554 "nvme_io": false, 00:17:43.554 "nvme_io_md": false, 00:17:43.554 "write_zeroes": true, 00:17:43.554 "zcopy": true, 00:17:43.554 "get_zone_info": false, 00:17:43.554 "zone_management": false, 00:17:43.554 "zone_append": false, 00:17:43.554 "compare": false, 00:17:43.554 "compare_and_write": false, 00:17:43.554 "abort": true, 00:17:43.554 "seek_hole": false, 00:17:43.554 "seek_data": false, 00:17:43.554 "copy": true, 00:17:43.554 "nvme_iov_md": false 00:17:43.554 }, 00:17:43.554 "memory_domains": [ 00:17:43.554 { 00:17:43.554 "dma_device_id": "system", 00:17:43.554 "dma_device_type": 1 00:17:43.554 }, 00:17:43.554 { 00:17:43.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.554 "dma_device_type": 2 00:17:43.554 } 00:17:43.554 ], 00:17:43.554 "driver_specific": {} 00:17:43.554 } 00:17:43.554 ] 00:17:43.554 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:43.554 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:43.554 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:43.554 10:25:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:43.874 [2024-07-15 10:25:08.390082] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.874 [2024-07-15 10:25:08.390112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.874 [2024-07-15 10:25:08.390125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.874 [2024-07-15 10:25:08.391035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:43.875 [2024-07-15 10:25:08.391064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.875 "name": "Existed_Raid", 00:17:43.875 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:43.875 "strip_size_kb": 0, 00:17:43.875 "state": "configuring", 00:17:43.875 "raid_level": "raid1", 00:17:43.875 "superblock": true, 00:17:43.875 "num_base_bdevs": 4, 00:17:43.875 "num_base_bdevs_discovered": 3, 00:17:43.875 "num_base_bdevs_operational": 4, 00:17:43.875 "base_bdevs_list": [ 00:17:43.875 { 00:17:43.875 "name": "BaseBdev1", 00:17:43.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.875 "is_configured": false, 00:17:43.875 "data_offset": 0, 00:17:43.875 "data_size": 0 00:17:43.875 }, 00:17:43.875 { 00:17:43.875 "name": "BaseBdev2", 00:17:43.875 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:43.875 "is_configured": true, 00:17:43.875 "data_offset": 2048, 00:17:43.875 "data_size": 63488 00:17:43.875 }, 00:17:43.875 { 00:17:43.875 "name": "BaseBdev3", 00:17:43.875 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:43.875 "is_configured": true, 00:17:43.875 "data_offset": 2048, 
00:17:43.875 "data_size": 63488 00:17:43.875 }, 00:17:43.875 { 00:17:43.875 "name": "BaseBdev4", 00:17:43.875 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:43.875 "is_configured": true, 00:17:43.875 "data_offset": 2048, 00:17:43.875 "data_size": 63488 00:17:43.875 } 00:17:43.875 ] 00:17:43.875 }' 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.875 10:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.446 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:44.446 [2024-07-15 10:25:09.224182] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.704 "name": "Existed_Raid", 00:17:44.704 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:44.704 "strip_size_kb": 0, 00:17:44.704 "state": "configuring", 00:17:44.704 "raid_level": "raid1", 00:17:44.704 "superblock": true, 00:17:44.704 "num_base_bdevs": 4, 00:17:44.704 "num_base_bdevs_discovered": 2, 00:17:44.704 "num_base_bdevs_operational": 4, 00:17:44.704 "base_bdevs_list": [ 00:17:44.704 { 00:17:44.704 "name": "BaseBdev1", 00:17:44.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.704 "is_configured": false, 00:17:44.704 "data_offset": 0, 00:17:44.704 "data_size": 0 00:17:44.704 }, 00:17:44.704 { 00:17:44.704 "name": null, 00:17:44.704 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:44.704 "is_configured": false, 00:17:44.704 "data_offset": 2048, 00:17:44.704 "data_size": 63488 00:17:44.704 }, 00:17:44.704 { 00:17:44.704 "name": "BaseBdev3", 00:17:44.704 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:44.704 "is_configured": true, 00:17:44.704 "data_offset": 2048, 00:17:44.704 "data_size": 63488 00:17:44.704 }, 00:17:44.704 
{ 00:17:44.704 "name": "BaseBdev4", 00:17:44.704 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:44.704 "is_configured": true, 00:17:44.704 "data_offset": 2048, 00:17:44.704 "data_size": 63488 00:17:44.704 } 00:17:44.704 ] 00:17:44.704 }' 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.704 10:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.272 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:45.272 10:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.272 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:45.272 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:45.530 [2024-07-15 10:25:10.225608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.530 BaseBdev1 00:17:45.530 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:45.530 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:45.530 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:45.530 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:45.530 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:45.530 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:45.530 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.787 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:45.787 [ 00:17:45.787 { 00:17:45.787 "name": "BaseBdev1", 00:17:45.787 "aliases": [ 00:17:45.787 "2427e88a-7950-431b-adb1-745a0fa38997" 00:17:45.787 ], 00:17:45.787 "product_name": "Malloc disk", 00:17:45.787 "block_size": 512, 00:17:45.787 "num_blocks": 65536, 00:17:45.787 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:45.787 "assigned_rate_limits": { 00:17:45.787 "rw_ios_per_sec": 0, 00:17:45.787 "rw_mbytes_per_sec": 0, 00:17:45.787 "r_mbytes_per_sec": 0, 00:17:45.787 "w_mbytes_per_sec": 0 00:17:45.787 }, 00:17:45.787 "claimed": true, 00:17:45.787 "claim_type": "exclusive_write", 00:17:45.787 "zoned": false, 00:17:45.787 "supported_io_types": { 00:17:45.787 "read": true, 00:17:45.787 "write": true, 00:17:45.787 "unmap": true, 00:17:45.787 "flush": true, 00:17:45.787 "reset": true, 00:17:45.787 "nvme_admin": false, 00:17:45.787 "nvme_io": false, 00:17:45.787 "nvme_io_md": false, 00:17:45.787 "write_zeroes": true, 00:17:45.787 "zcopy": true, 00:17:45.787 "get_zone_info": false, 00:17:45.787 "zone_management": false, 00:17:45.787 "zone_append": false, 00:17:45.787 "compare": false, 00:17:45.787 "compare_and_write": false, 
00:17:45.787 "abort": true, 00:17:45.787 "seek_hole": false, 00:17:45.787 "seek_data": false, 00:17:45.787 "copy": true, 00:17:45.787 "nvme_iov_md": false 00:17:45.787 }, 00:17:45.787 "memory_domains": [ 00:17:45.787 { 00:17:45.787 "dma_device_id": "system", 00:17:45.787 "dma_device_type": 1 00:17:45.787 }, 00:17:45.787 { 00:17:45.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.787 "dma_device_type": 2 00:17:45.787 } 00:17:45.787 ], 00:17:45.787 "driver_specific": {} 00:17:45.787 } 00:17:45.787 ] 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.045 "name": "Existed_Raid", 00:17:46.045 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:46.045 "strip_size_kb": 0, 00:17:46.045 "state": "configuring", 00:17:46.045 "raid_level": "raid1", 00:17:46.045 "superblock": true, 00:17:46.045 "num_base_bdevs": 4, 00:17:46.045 "num_base_bdevs_discovered": 3, 00:17:46.045 "num_base_bdevs_operational": 4, 00:17:46.045 "base_bdevs_list": [ 00:17:46.045 { 00:17:46.045 "name": "BaseBdev1", 00:17:46.045 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:46.045 "is_configured": true, 00:17:46.045 "data_offset": 2048, 00:17:46.045 "data_size": 63488 00:17:46.045 }, 00:17:46.045 { 00:17:46.045 "name": null, 00:17:46.045 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:46.045 "is_configured": false, 00:17:46.045 "data_offset": 2048, 00:17:46.045 "data_size": 63488 00:17:46.045 }, 00:17:46.045 { 00:17:46.045 "name": "BaseBdev3", 00:17:46.045 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:46.045 "is_configured": true, 00:17:46.045 "data_offset": 2048, 00:17:46.045 "data_size": 63488 00:17:46.045 }, 00:17:46.045 { 00:17:46.045 "name": "BaseBdev4", 00:17:46.045 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:46.045 "is_configured": true, 00:17:46.045 "data_offset": 2048, 00:17:46.045 "data_size": 63488 00:17:46.045 } 
00:17:46.045 ] 00:17:46.045 }' 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.045 10:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.607 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:46.608 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.608 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:46.608 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:46.865 [2024-07-15 10:25:11.541008] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.865 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.121 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.121 "name": "Existed_Raid", 00:17:47.121 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:47.121 "strip_size_kb": 0, 00:17:47.121 "state": "configuring", 00:17:47.121 "raid_level": "raid1", 00:17:47.121 "superblock": true, 00:17:47.121 "num_base_bdevs": 4, 00:17:47.121 "num_base_bdevs_discovered": 2, 00:17:47.121 "num_base_bdevs_operational": 4, 00:17:47.121 "base_bdevs_list": [ 00:17:47.121 { 00:17:47.121 "name": "BaseBdev1", 00:17:47.121 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:47.121 "is_configured": true, 00:17:47.121 "data_offset": 2048, 00:17:47.121 "data_size": 63488 00:17:47.121 }, 00:17:47.121 { 00:17:47.121 "name": null, 00:17:47.121 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:47.121 "is_configured": false, 00:17:47.121 "data_offset": 2048, 00:17:47.121 "data_size": 63488 00:17:47.121 }, 00:17:47.121 { 00:17:47.121 "name": null, 00:17:47.121 
"uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:47.121 "is_configured": false, 00:17:47.121 "data_offset": 2048, 00:17:47.121 "data_size": 63488 00:17:47.121 }, 00:17:47.121 { 00:17:47.121 "name": "BaseBdev4", 00:17:47.121 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:47.121 "is_configured": true, 00:17:47.121 "data_offset": 2048, 00:17:47.121 "data_size": 63488 00:17:47.121 } 00:17:47.121 ] 00:17:47.121 }' 00:17:47.121 10:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.122 10:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.687 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.687 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:47.687 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:47.687 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:47.945 [2024-07-15 10:25:12.551623] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.945 "name": "Existed_Raid", 00:17:47.945 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:47.945 "strip_size_kb": 0, 00:17:47.945 "state": "configuring", 00:17:47.945 "raid_level": "raid1", 00:17:47.945 "superblock": true, 00:17:47.945 "num_base_bdevs": 4, 00:17:47.945 "num_base_bdevs_discovered": 3, 00:17:47.945 "num_base_bdevs_operational": 4, 00:17:47.945 "base_bdevs_list": [ 00:17:47.945 { 00:17:47.945 "name": "BaseBdev1", 00:17:47.945 "uuid": 
"2427e88a-7950-431b-adb1-745a0fa38997", 00:17:47.945 "is_configured": true, 00:17:47.945 "data_offset": 2048, 00:17:47.945 "data_size": 63488 00:17:47.945 }, 00:17:47.945 { 00:17:47.945 "name": null, 00:17:47.945 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:47.945 "is_configured": false, 00:17:47.945 "data_offset": 2048, 00:17:47.945 "data_size": 63488 00:17:47.945 }, 00:17:47.945 { 00:17:47.945 "name": "BaseBdev3", 00:17:47.945 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:47.945 "is_configured": true, 00:17:47.945 "data_offset": 2048, 00:17:47.945 "data_size": 63488 00:17:47.945 }, 00:17:47.945 { 00:17:47.945 "name": "BaseBdev4", 00:17:47.945 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:47.945 "is_configured": true, 00:17:47.945 "data_offset": 2048, 00:17:47.945 "data_size": 63488 00:17:47.945 } 00:17:47.945 ] 00:17:47.945 }' 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.945 10:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.560 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:48.560 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.818 [2024-07-15 10:25:13.550210] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.818 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.076 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.076 "name": "Existed_Raid", 00:17:49.076 "uuid": 
"2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:49.076 "strip_size_kb": 0, 00:17:49.076 "state": "configuring", 00:17:49.076 "raid_level": "raid1", 00:17:49.076 "superblock": true, 00:17:49.076 "num_base_bdevs": 4, 00:17:49.076 "num_base_bdevs_discovered": 2, 00:17:49.076 "num_base_bdevs_operational": 4, 00:17:49.076 "base_bdevs_list": [ 00:17:49.076 { 00:17:49.076 "name": null, 00:17:49.076 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:49.076 "is_configured": false, 00:17:49.076 "data_offset": 2048, 00:17:49.076 "data_size": 63488 00:17:49.076 }, 00:17:49.076 { 00:17:49.076 "name": null, 00:17:49.076 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:49.076 "is_configured": false, 00:17:49.076 "data_offset": 2048, 00:17:49.076 "data_size": 63488 00:17:49.076 }, 00:17:49.076 { 00:17:49.076 "name": "BaseBdev3", 00:17:49.076 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:49.076 "is_configured": true, 00:17:49.076 "data_offset": 2048, 00:17:49.076 "data_size": 63488 00:17:49.076 }, 00:17:49.076 { 00:17:49.076 "name": "BaseBdev4", 00:17:49.076 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:49.076 "is_configured": true, 00:17:49.076 "data_offset": 2048, 00:17:49.076 "data_size": 63488 00:17:49.076 } 00:17:49.076 ] 00:17:49.076 }' 00:17:49.076 10:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.076 10:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.642 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.642 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:49.642 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:49.642 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:49.900 [2024-07-15 10:25:14.522141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.900 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.158 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.158 "name": "Existed_Raid", 00:17:50.158 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:50.158 "strip_size_kb": 0, 00:17:50.158 "state": "configuring", 00:17:50.158 "raid_level": "raid1", 00:17:50.158 "superblock": true, 00:17:50.158 "num_base_bdevs": 4, 00:17:50.158 "num_base_bdevs_discovered": 3, 00:17:50.158 "num_base_bdevs_operational": 4, 00:17:50.158 "base_bdevs_list": [ 00:17:50.158 { 00:17:50.158 "name": null, 00:17:50.159 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:50.159 "is_configured": false, 00:17:50.159 "data_offset": 2048, 00:17:50.159 "data_size": 63488 00:17:50.159 }, 00:17:50.159 { 00:17:50.159 "name": "BaseBdev2", 00:17:50.159 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:50.159 "is_configured": true, 00:17:50.159 "data_offset": 2048, 00:17:50.159 "data_size": 63488 00:17:50.159 }, 00:17:50.159 { 00:17:50.159 "name": "BaseBdev3", 00:17:50.159 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:50.159 "is_configured": true, 00:17:50.159 "data_offset": 2048, 00:17:50.159 "data_size": 63488 00:17:50.159 }, 00:17:50.159 { 00:17:50.159 "name": "BaseBdev4", 00:17:50.159 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:50.159 "is_configured": true, 00:17:50.159 "data_offset": 2048, 00:17:50.159 "data_size": 63488 00:17:50.159 } 00:17:50.159 ] 00:17:50.159 }' 00:17:50.159 10:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.159 10:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.725 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.725 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:50.725 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:50.725 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.725 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:50.983 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2427e88a-7950-431b-adb1-745a0fa38997 00:17:50.983 [2024-07-15 10:25:15.687855] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:50.983 [2024-07-15 10:25:15.688010] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfaef40 00:17:50.983 [2024-07-15 10:25:15.688025] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:50.983 [2024-07-15 10:25:15.688143] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfa3690 00:17:50.983 [2024-07-15 10:25:15.688226] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0xfaef40 00:17:50.983 [2024-07-15 10:25:15.688232] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xfaef40 00:17:50.983 [2024-07-15 10:25:15.688294] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.983 NewBaseBdev 00:17:50.983 10:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:50.983 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:50.983 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:50.983 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:50.983 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:50.983 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:50.983 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.242 10:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:51.242 [ 00:17:51.242 { 00:17:51.242 "name": "NewBaseBdev", 00:17:51.242 "aliases": [ 00:17:51.242 "2427e88a-7950-431b-adb1-745a0fa38997" 00:17:51.242 ], 00:17:51.242 "product_name": "Malloc disk", 00:17:51.242 "block_size": 512, 00:17:51.242 "num_blocks": 65536, 00:17:51.242 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:51.242 "assigned_rate_limits": { 00:17:51.242 "rw_ios_per_sec": 0, 00:17:51.242 "rw_mbytes_per_sec": 0, 00:17:51.242 "r_mbytes_per_sec": 0, 00:17:51.242 "w_mbytes_per_sec": 0 00:17:51.242 }, 00:17:51.242 "claimed": true, 00:17:51.242 "claim_type": "exclusive_write", 00:17:51.242 "zoned": false, 00:17:51.242 "supported_io_types": { 00:17:51.242 "read": true, 00:17:51.242 "write": true, 00:17:51.242 "unmap": true, 00:17:51.242 "flush": true, 00:17:51.242 "reset": true, 00:17:51.242 "nvme_admin": false, 00:17:51.242 "nvme_io": false, 00:17:51.242 "nvme_io_md": false, 00:17:51.242 "write_zeroes": true, 00:17:51.242 "zcopy": true, 00:17:51.242 "get_zone_info": false, 00:17:51.242 "zone_management": false, 00:17:51.242 "zone_append": false, 00:17:51.242 "compare": false, 00:17:51.242 "compare_and_write": false, 00:17:51.242 "abort": true, 00:17:51.242 "seek_hole": false, 00:17:51.242 "seek_data": false, 00:17:51.242 "copy": true, 00:17:51.242 "nvme_iov_md": false 00:17:51.242 }, 00:17:51.242 "memory_domains": [ 00:17:51.242 { 00:17:51.242 "dma_device_id": "system", 00:17:51.242 "dma_device_type": 1 00:17:51.242 }, 00:17:51.242 { 00:17:51.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.242 "dma_device_type": 2 00:17:51.242 } 00:17:51.242 ], 00:17:51.242 "driver_specific": {} 00:17:51.242 } 00:17:51.242 ] 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- 
# local expected_state=online 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.500 "name": "Existed_Raid", 00:17:51.500 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:51.500 "strip_size_kb": 0, 00:17:51.500 "state": "online", 00:17:51.500 "raid_level": "raid1", 00:17:51.500 "superblock": true, 00:17:51.500 "num_base_bdevs": 4, 00:17:51.500 "num_base_bdevs_discovered": 4, 00:17:51.500 "num_base_bdevs_operational": 4, 00:17:51.500 "base_bdevs_list": [ 00:17:51.500 { 00:17:51.500 "name": "NewBaseBdev", 00:17:51.500 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:51.500 "is_configured": true, 00:17:51.500 "data_offset": 2048, 00:17:51.500 "data_size": 63488 00:17:51.500 }, 00:17:51.500 { 00:17:51.500 "name": "BaseBdev2", 00:17:51.500 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:51.500 "is_configured": true, 00:17:51.500 "data_offset": 2048, 00:17:51.500 "data_size": 63488 00:17:51.500 }, 00:17:51.500 { 00:17:51.500 "name": "BaseBdev3", 00:17:51.500 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:51.500 "is_configured": true, 00:17:51.500 "data_offset": 2048, 00:17:51.500 "data_size": 63488 00:17:51.500 }, 00:17:51.500 { 00:17:51.500 "name": "BaseBdev4", 00:17:51.500 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:51.500 "is_configured": true, 00:17:51.500 "data_offset": 2048, 00:17:51.500 "data_size": 63488 00:17:51.500 } 00:17:51.500 ] 00:17:51.500 }' 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.500 10:25:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.066 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:52.066 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:52.066 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:52.066 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:52.066 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:52.066 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:52.066 10:25:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:52.066 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:52.066 [2024-07-15 10:25:16.839067] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.324 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:52.325 "name": "Existed_Raid", 00:17:52.325 "aliases": [ 00:17:52.325 "2c83c85b-19e5-4410-9e27-ec28dfcf5017" 00:17:52.325 ], 00:17:52.325 "product_name": "Raid Volume", 00:17:52.325 "block_size": 512, 00:17:52.325 "num_blocks": 63488, 00:17:52.325 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:52.325 "assigned_rate_limits": { 00:17:52.325 "rw_ios_per_sec": 0, 00:17:52.325 "rw_mbytes_per_sec": 0, 00:17:52.325 "r_mbytes_per_sec": 0, 00:17:52.325 "w_mbytes_per_sec": 0 00:17:52.325 }, 00:17:52.325 "claimed": false, 00:17:52.325 "zoned": false, 00:17:52.325 "supported_io_types": { 00:17:52.325 "read": true, 00:17:52.325 "write": true, 00:17:52.325 "unmap": false, 00:17:52.325 "flush": false, 00:17:52.325 "reset": true, 00:17:52.325 "nvme_admin": false, 00:17:52.325 "nvme_io": false, 00:17:52.325 "nvme_io_md": false, 00:17:52.325 "write_zeroes": true, 00:17:52.325 "zcopy": false, 00:17:52.325 "get_zone_info": false, 00:17:52.325 "zone_management": false, 00:17:52.325 "zone_append": false, 00:17:52.325 "compare": false, 00:17:52.325 "compare_and_write": false, 00:17:52.325 "abort": false, 00:17:52.325 "seek_hole": false, 00:17:52.325 "seek_data": false, 00:17:52.325 "copy": false, 00:17:52.325 "nvme_iov_md": false 00:17:52.325 }, 00:17:52.325 "memory_domains": [ 00:17:52.325 { 00:17:52.325 "dma_device_id": "system", 00:17:52.325 "dma_device_type": 1 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.325 "dma_device_type": 2 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "dma_device_id": "system", 00:17:52.325 "dma_device_type": 1 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.325 "dma_device_type": 2 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "dma_device_id": "system", 00:17:52.325 "dma_device_type": 1 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.325 "dma_device_type": 2 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "dma_device_id": "system", 00:17:52.325 "dma_device_type": 1 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.325 "dma_device_type": 2 00:17:52.325 } 00:17:52.325 ], 00:17:52.325 "driver_specific": { 00:17:52.325 "raid": { 00:17:52.325 "uuid": "2c83c85b-19e5-4410-9e27-ec28dfcf5017", 00:17:52.325 "strip_size_kb": 0, 00:17:52.325 "state": "online", 00:17:52.325 "raid_level": "raid1", 00:17:52.325 "superblock": true, 00:17:52.325 "num_base_bdevs": 4, 00:17:52.325 "num_base_bdevs_discovered": 4, 00:17:52.325 "num_base_bdevs_operational": 4, 00:17:52.325 "base_bdevs_list": [ 00:17:52.325 { 00:17:52.325 "name": "NewBaseBdev", 00:17:52.325 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:52.325 "is_configured": true, 00:17:52.325 "data_offset": 2048, 00:17:52.325 "data_size": 63488 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "name": "BaseBdev2", 00:17:52.325 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:52.325 "is_configured": true, 00:17:52.325 "data_offset": 2048, 00:17:52.325 
"data_size": 63488 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "name": "BaseBdev3", 00:17:52.325 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:52.325 "is_configured": true, 00:17:52.325 "data_offset": 2048, 00:17:52.325 "data_size": 63488 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "name": "BaseBdev4", 00:17:52.325 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:52.325 "is_configured": true, 00:17:52.325 "data_offset": 2048, 00:17:52.325 "data_size": 63488 00:17:52.325 } 00:17:52.325 ] 00:17:52.325 } 00:17:52.325 } 00:17:52.325 }' 00:17:52.325 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.325 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:52.325 BaseBdev2 00:17:52.325 BaseBdev3 00:17:52.325 BaseBdev4' 00:17:52.325 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.325 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:52.325 10:25:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:52.325 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:52.325 "name": "NewBaseBdev", 00:17:52.325 "aliases": [ 00:17:52.325 "2427e88a-7950-431b-adb1-745a0fa38997" 00:17:52.325 ], 00:17:52.325 "product_name": "Malloc disk", 00:17:52.325 "block_size": 512, 00:17:52.325 "num_blocks": 65536, 00:17:52.325 "uuid": "2427e88a-7950-431b-adb1-745a0fa38997", 00:17:52.325 "assigned_rate_limits": { 00:17:52.325 "rw_ios_per_sec": 0, 00:17:52.325 "rw_mbytes_per_sec": 0, 00:17:52.325 "r_mbytes_per_sec": 0, 00:17:52.325 "w_mbytes_per_sec": 0 00:17:52.325 }, 00:17:52.325 "claimed": true, 00:17:52.325 "claim_type": "exclusive_write", 00:17:52.325 "zoned": false, 00:17:52.325 "supported_io_types": { 00:17:52.325 "read": true, 00:17:52.325 "write": true, 00:17:52.325 "unmap": true, 00:17:52.325 "flush": true, 00:17:52.325 "reset": true, 00:17:52.325 "nvme_admin": false, 00:17:52.325 "nvme_io": false, 00:17:52.325 "nvme_io_md": false, 00:17:52.325 "write_zeroes": true, 00:17:52.325 "zcopy": true, 00:17:52.325 "get_zone_info": false, 00:17:52.325 "zone_management": false, 00:17:52.325 "zone_append": false, 00:17:52.325 "compare": false, 00:17:52.325 "compare_and_write": false, 00:17:52.325 "abort": true, 00:17:52.325 "seek_hole": false, 00:17:52.325 "seek_data": false, 00:17:52.325 "copy": true, 00:17:52.325 "nvme_iov_md": false 00:17:52.325 }, 00:17:52.325 "memory_domains": [ 00:17:52.325 { 00:17:52.325 "dma_device_id": "system", 00:17:52.325 "dma_device_type": 1 00:17:52.325 }, 00:17:52.325 { 00:17:52.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.325 "dma_device_type": 2 00:17:52.325 } 00:17:52.325 ], 00:17:52.325 "driver_specific": {} 00:17:52.325 }' 00:17:52.325 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.584 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:52.842 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:52.842 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:52.842 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:52.842 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:52.842 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:52.842 "name": "BaseBdev2", 00:17:52.842 "aliases": [ 00:17:52.842 "9838071d-a5ef-4baf-b3aa-7c963137b25f" 00:17:52.842 ], 00:17:52.842 "product_name": "Malloc disk", 00:17:52.842 "block_size": 512, 00:17:52.842 "num_blocks": 65536, 00:17:52.842 "uuid": "9838071d-a5ef-4baf-b3aa-7c963137b25f", 00:17:52.842 "assigned_rate_limits": { 00:17:52.842 "rw_ios_per_sec": 0, 00:17:52.842 "rw_mbytes_per_sec": 0, 00:17:52.842 "r_mbytes_per_sec": 0, 00:17:52.842 "w_mbytes_per_sec": 0 00:17:52.842 }, 00:17:52.842 "claimed": true, 00:17:52.842 "claim_type": "exclusive_write", 00:17:52.842 "zoned": false, 00:17:52.842 "supported_io_types": { 00:17:52.842 "read": true, 00:17:52.842 "write": true, 00:17:52.842 "unmap": true, 00:17:52.842 "flush": true, 00:17:52.842 "reset": true, 00:17:52.842 "nvme_admin": false, 00:17:52.842 "nvme_io": false, 00:17:52.842 "nvme_io_md": false, 00:17:52.842 "write_zeroes": true, 00:17:52.842 "zcopy": true, 00:17:52.842 "get_zone_info": false, 00:17:52.842 "zone_management": false, 00:17:52.842 "zone_append": false, 00:17:52.842 "compare": false, 00:17:52.842 "compare_and_write": false, 00:17:52.842 "abort": true, 00:17:52.842 "seek_hole": false, 00:17:52.842 "seek_data": false, 00:17:52.842 "copy": true, 00:17:52.842 "nvme_iov_md": false 00:17:52.842 }, 00:17:52.842 "memory_domains": [ 00:17:52.842 { 00:17:52.842 "dma_device_id": "system", 00:17:52.842 "dma_device_type": 1 00:17:52.842 }, 00:17:52.842 { 00:17:52.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.842 "dma_device_type": 2 00:17:52.842 } 00:17:52.842 ], 00:17:52.842 "driver_specific": {} 00:17:52.842 }' 00:17:52.842 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:52.842 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.101 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.359 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.359 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.359 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:53.359 10:25:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.359 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.359 "name": "BaseBdev3", 00:17:53.359 "aliases": [ 00:17:53.359 "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe" 00:17:53.359 ], 00:17:53.359 "product_name": "Malloc disk", 00:17:53.359 "block_size": 512, 00:17:53.359 "num_blocks": 65536, 00:17:53.359 "uuid": "efd0963e-fa6f-4e28-ba7d-2f5789d45dfe", 00:17:53.359 "assigned_rate_limits": { 00:17:53.359 "rw_ios_per_sec": 0, 00:17:53.359 "rw_mbytes_per_sec": 0, 00:17:53.359 "r_mbytes_per_sec": 0, 00:17:53.359 "w_mbytes_per_sec": 0 00:17:53.359 }, 00:17:53.359 "claimed": true, 00:17:53.359 "claim_type": "exclusive_write", 00:17:53.359 "zoned": false, 00:17:53.359 "supported_io_types": { 00:17:53.359 "read": true, 00:17:53.359 "write": true, 00:17:53.359 "unmap": true, 00:17:53.359 "flush": true, 00:17:53.359 "reset": true, 00:17:53.359 "nvme_admin": false, 00:17:53.359 "nvme_io": false, 00:17:53.359 "nvme_io_md": false, 00:17:53.359 "write_zeroes": true, 00:17:53.359 "zcopy": true, 00:17:53.359 "get_zone_info": false, 00:17:53.359 "zone_management": false, 00:17:53.359 "zone_append": false, 00:17:53.359 "compare": false, 00:17:53.359 "compare_and_write": false, 00:17:53.359 "abort": true, 00:17:53.359 "seek_hole": false, 00:17:53.359 "seek_data": false, 00:17:53.359 "copy": true, 00:17:53.359 "nvme_iov_md": false 00:17:53.359 }, 00:17:53.359 "memory_domains": [ 00:17:53.359 { 00:17:53.359 "dma_device_id": "system", 00:17:53.359 "dma_device_type": 1 00:17:53.359 }, 00:17:53.359 { 00:17:53.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.359 "dma_device_type": 2 00:17:53.359 } 00:17:53.359 ], 00:17:53.359 "driver_specific": {} 00:17:53.359 }' 00:17:53.359 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.359 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
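The trace around this point is the bdev_raid.sh@203-208 loop walking the configured base bdevs (NewBaseBdev, BaseBdev2, BaseBdev3, BaseBdev4): for each one it fetches the bdev description over the RPC socket and asserts that block_size is 512 while md_size, md_interleave and dif_type are all null. A minimal sketch of that loop, assuming the same /var/tmp/spdk-raid.sock socket and bdev names seen in this run (an illustration of the checks, not the verbatim test script):

sock=/var/tmp/spdk-raid.sock
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
for name in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
    # Fetch the single-bdev description, as bdev_raid.sh@204 does.
    info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size <<< "$info") == 512 ]]      # 512 B data blocks on the malloc base bdevs
    [[ $(jq .md_size <<< "$info") == null ]]        # no separate metadata region
    [[ $(jq .md_interleave <<< "$info") == null ]]  # hence no interleaved metadata
    [[ $(jq .dif_type <<< "$info") == null ]]       # and no protection information
done

Comparing against the literal string null works because jq prints null for absent fields, which is exactly what the [[ null == null ]] checks in the trace rely on.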
00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.617 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:17:53.875 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.875 "name": "BaseBdev4", 00:17:53.875 "aliases": [ 00:17:53.875 "f5ddea98-f21e-4537-8524-3a161b00aee6" 00:17:53.875 ], 00:17:53.875 "product_name": "Malloc disk", 00:17:53.875 "block_size": 512, 00:17:53.875 "num_blocks": 65536, 00:17:53.875 "uuid": "f5ddea98-f21e-4537-8524-3a161b00aee6", 00:17:53.875 "assigned_rate_limits": { 00:17:53.875 "rw_ios_per_sec": 0, 00:17:53.875 "rw_mbytes_per_sec": 0, 00:17:53.875 "r_mbytes_per_sec": 0, 00:17:53.875 "w_mbytes_per_sec": 0 00:17:53.875 }, 00:17:53.875 "claimed": true, 00:17:53.875 "claim_type": "exclusive_write", 00:17:53.875 "zoned": false, 00:17:53.875 "supported_io_types": { 00:17:53.875 "read": true, 00:17:53.875 "write": true, 00:17:53.875 "unmap": true, 00:17:53.875 "flush": true, 00:17:53.875 "reset": true, 00:17:53.875 "nvme_admin": false, 00:17:53.875 "nvme_io": false, 00:17:53.875 "nvme_io_md": false, 00:17:53.875 "write_zeroes": true, 00:17:53.875 "zcopy": true, 00:17:53.875 "get_zone_info": false, 00:17:53.875 "zone_management": false, 00:17:53.875 "zone_append": false, 00:17:53.875 "compare": false, 00:17:53.875 "compare_and_write": false, 00:17:53.875 "abort": true, 00:17:53.875 "seek_hole": false, 00:17:53.875 "seek_data": false, 00:17:53.875 "copy": true, 00:17:53.875 "nvme_iov_md": false 00:17:53.875 }, 00:17:53.875 "memory_domains": [ 00:17:53.875 { 00:17:53.875 "dma_device_id": "system", 00:17:53.875 "dma_device_type": 1 00:17:53.875 }, 00:17:53.875 { 00:17:53.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.875 "dma_device_type": 2 00:17:53.875 } 00:17:53.875 ], 00:17:53.875 "driver_specific": {} 00:17:53.875 }' 00:17:53.876 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.876 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.876 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.876 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.134 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.134 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:54.134 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.134 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.134 10:25:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:54.134 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.134 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.134 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:54.134 10:25:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.393 [2024-07-15 10:25:19.004436] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.393 [2024-07-15 10:25:19.004455] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.393 [2024-07-15 10:25:19.004489] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.393 [2024-07-15 10:25:19.004669] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.393 [2024-07-15 10:25:19.004676] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfaef40 name Existed_Raid, state offline 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1837631 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1837631 ']' 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 1837631 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1837631 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1837631' 00:17:54.393 killing process with pid 1837631 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 1837631 00:17:54.393 [2024-07-15 10:25:19.061314] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.393 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 1837631 00:17:54.393 [2024-07-15 10:25:19.090632] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.652 10:25:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:54.652 00:17:54.652 real 0m24.317s 00:17:54.652 user 0m44.415s 00:17:54.652 sys 0m4.623s 00:17:54.652 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.652 10:25:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.652 ************************************ 00:17:54.652 END TEST raid_state_function_test_sb 00:17:54.652 ************************************ 00:17:54.652 10:25:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:54.652 10:25:19 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test 
raid_superblock_test raid_superblock_test raid1 4 00:17:54.652 10:25:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:54.652 10:25:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.652 10:25:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.652 ************************************ 00:17:54.652 START TEST raid_superblock_test 00:17:54.652 ************************************ 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=1842531 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 1842531 /var/tmp/spdk-raid.sock 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 1842531 ']' 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:54.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.652 10:25:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.652 [2024-07-15 10:25:19.396085] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
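The run_test wrapper above launches bdev_svc with -r /var/tmp/spdk-raid.sock -L bdev_raid and then drives the whole test over that socket. Condensed from the RPC calls that appear later in this trace, the setup amounts to creating four malloc bdevs, wrapping each in a passthru bdev with a fixed UUID, and assembling them into a raid1 with an on-disk superblock; the sketch below is an illustration of that sequence under those assumptions, not the verbatim bdev_raid.sh script:

sock=/var/tmp/spdk-raid.sock
rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
for i in 1 2 3 4; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc$i        # 32 MiB backing store, 512 B blocks
    "$rpc" -s "$sock" bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i                   # fixed per-leg UUID, as in the trace
done
# -s asks for an on-disk superblock, which is what raid_superblock_test exercises.
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The final query is the same bdev_raid_get_bdevs all | jq filter the script applies at bdev_raid.sh@126 to confirm the array comes up online with all four base bdevs discovered.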
00:17:54.652 [2024-07-15 10:25:19.396129] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842531 ] 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:54.911 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:54.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:54.911 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:54.911 [2024-07-15 10:25:19.487033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.911 [2024-07-15 10:25:19.559220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.911 [2024-07-15 10:25:19.611967] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.911 [2024-07-15 10:25:19.611996] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:55.478 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:55.736 malloc1 00:17:55.736 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:55.736 [2024-07-15 10:25:20.476122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:55.736 [2024-07-15 10:25:20.476157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.737 [2024-07-15 10:25:20.476171] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dba2f0 00:17:55.737 [2024-07-15 10:25:20.476183] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.737 [2024-07-15 10:25:20.477317] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.737 [2024-07-15 10:25:20.477340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:55.737 pt1 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:55.737 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:55.996 malloc2 00:17:55.996 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.255 [2024-07-15 10:25:20.804645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.255 [2024-07-15 10:25:20.804677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.255 [2024-07-15 10:25:20.804689] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dbb6d0 00:17:56.255 [2024-07-15 10:25:20.804697] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.255 [2024-07-15 10:25:20.805737] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.255 [2024-07-15 10:25:20.805758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.255 pt2 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:56.255 malloc3 00:17:56.255 10:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:56.513 [2024-07-15 10:25:21.149006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:56.513 [2024-07-15 10:25:21.149041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.513 [2024-07-15 10:25:21.149053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f546b0 00:17:56.513 [2024-07-15 10:25:21.149061] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.513 [2024-07-15 10:25:21.150115] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.513 [2024-07-15 10:25:21.150137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:56.513 pt3 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:56.513 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:56.771 malloc4 00:17:56.771 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:56.771 [2024-07-15 10:25:21.465375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:56.771 [2024-07-15 10:25:21.465408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.771 [2024-07-15 10:25:21.465421] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f52370 00:17:56.771 [2024-07-15 10:25:21.465429] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.771 [2024-07-15 10:25:21.466382] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.771 [2024-07-15 10:25:21.466403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:56.771 pt4 00:17:56.771 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:56.771 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:56.771 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:57.063 [2024-07-15 10:25:21.617785] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.063 [2024-07-15 10:25:21.618597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.063 [2024-07-15 10:25:21.618633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:57.063 [2024-07-15 10:25:21.618662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:57.063 [2024-07-15 10:25:21.618772] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1db3560 00:17:57.063 [2024-07-15 10:25:21.618779] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:57.063 [2024-07-15 10:25:21.618909] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f53680 00:17:57.063 [2024-07-15 10:25:21.619008] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1db3560 00:17:57.063 [2024-07-15 10:25:21.619014] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1db3560 00:17:57.063 [2024-07-15 10:25:21.619075] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.063 "name": "raid_bdev1", 00:17:57.063 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:17:57.063 "strip_size_kb": 0, 00:17:57.063 "state": "online", 00:17:57.063 "raid_level": "raid1", 00:17:57.063 "superblock": true, 00:17:57.063 "num_base_bdevs": 4, 00:17:57.063 "num_base_bdevs_discovered": 4, 00:17:57.063 "num_base_bdevs_operational": 4, 00:17:57.063 "base_bdevs_list": [ 00:17:57.063 { 00:17:57.063 "name": "pt1", 00:17:57.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.063 "is_configured": true, 00:17:57.063 "data_offset": 2048, 00:17:57.063 "data_size": 63488 00:17:57.063 }, 00:17:57.063 { 00:17:57.063 
"name": "pt2", 00:17:57.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.063 "is_configured": true, 00:17:57.063 "data_offset": 2048, 00:17:57.063 "data_size": 63488 00:17:57.063 }, 00:17:57.063 { 00:17:57.063 "name": "pt3", 00:17:57.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.063 "is_configured": true, 00:17:57.063 "data_offset": 2048, 00:17:57.063 "data_size": 63488 00:17:57.063 }, 00:17:57.063 { 00:17:57.063 "name": "pt4", 00:17:57.063 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.063 "is_configured": true, 00:17:57.063 "data_offset": 2048, 00:17:57.063 "data_size": 63488 00:17:57.063 } 00:17:57.063 ] 00:17:57.063 }' 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.063 10:25:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:57.629 [2024-07-15 10:25:22.396000] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.629 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:57.629 "name": "raid_bdev1", 00:17:57.629 "aliases": [ 00:17:57.629 "c33794a3-8735-488c-8385-003b118da85e" 00:17:57.629 ], 00:17:57.629 "product_name": "Raid Volume", 00:17:57.629 "block_size": 512, 00:17:57.629 "num_blocks": 63488, 00:17:57.629 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:17:57.629 "assigned_rate_limits": { 00:17:57.629 "rw_ios_per_sec": 0, 00:17:57.629 "rw_mbytes_per_sec": 0, 00:17:57.629 "r_mbytes_per_sec": 0, 00:17:57.629 "w_mbytes_per_sec": 0 00:17:57.629 }, 00:17:57.629 "claimed": false, 00:17:57.629 "zoned": false, 00:17:57.629 "supported_io_types": { 00:17:57.629 "read": true, 00:17:57.629 "write": true, 00:17:57.629 "unmap": false, 00:17:57.629 "flush": false, 00:17:57.629 "reset": true, 00:17:57.629 "nvme_admin": false, 00:17:57.629 "nvme_io": false, 00:17:57.629 "nvme_io_md": false, 00:17:57.629 "write_zeroes": true, 00:17:57.629 "zcopy": false, 00:17:57.629 "get_zone_info": false, 00:17:57.629 "zone_management": false, 00:17:57.629 "zone_append": false, 00:17:57.629 "compare": false, 00:17:57.629 "compare_and_write": false, 00:17:57.629 "abort": false, 00:17:57.629 "seek_hole": false, 00:17:57.629 "seek_data": false, 00:17:57.629 "copy": false, 00:17:57.629 "nvme_iov_md": false 00:17:57.629 }, 00:17:57.629 "memory_domains": [ 00:17:57.629 { 00:17:57.629 "dma_device_id": "system", 00:17:57.629 "dma_device_type": 1 00:17:57.629 }, 00:17:57.629 { 00:17:57.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.629 "dma_device_type": 2 00:17:57.629 }, 00:17:57.629 { 
00:17:57.629 "dma_device_id": "system", 00:17:57.629 "dma_device_type": 1 00:17:57.629 }, 00:17:57.629 { 00:17:57.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.629 "dma_device_type": 2 00:17:57.629 }, 00:17:57.629 { 00:17:57.629 "dma_device_id": "system", 00:17:57.629 "dma_device_type": 1 00:17:57.629 }, 00:17:57.629 { 00:17:57.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.629 "dma_device_type": 2 00:17:57.629 }, 00:17:57.629 { 00:17:57.629 "dma_device_id": "system", 00:17:57.629 "dma_device_type": 1 00:17:57.629 }, 00:17:57.629 { 00:17:57.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.629 "dma_device_type": 2 00:17:57.629 } 00:17:57.629 ], 00:17:57.629 "driver_specific": { 00:17:57.629 "raid": { 00:17:57.629 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:17:57.629 "strip_size_kb": 0, 00:17:57.629 "state": "online", 00:17:57.629 "raid_level": "raid1", 00:17:57.629 "superblock": true, 00:17:57.629 "num_base_bdevs": 4, 00:17:57.629 "num_base_bdevs_discovered": 4, 00:17:57.629 "num_base_bdevs_operational": 4, 00:17:57.629 "base_bdevs_list": [ 00:17:57.629 { 00:17:57.629 "name": "pt1", 00:17:57.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.629 "is_configured": true, 00:17:57.629 "data_offset": 2048, 00:17:57.629 "data_size": 63488 00:17:57.629 }, 00:17:57.629 { 00:17:57.629 "name": "pt2", 00:17:57.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.629 "is_configured": true, 00:17:57.629 "data_offset": 2048, 00:17:57.629 "data_size": 63488 00:17:57.629 }, 00:17:57.629 { 00:17:57.629 "name": "pt3", 00:17:57.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.630 "is_configured": true, 00:17:57.630 "data_offset": 2048, 00:17:57.630 "data_size": 63488 00:17:57.630 }, 00:17:57.630 { 00:17:57.630 "name": "pt4", 00:17:57.630 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:57.630 "is_configured": true, 00:17:57.630 "data_offset": 2048, 00:17:57.630 "data_size": 63488 00:17:57.630 } 00:17:57.630 ] 00:17:57.630 } 00:17:57.630 } 00:17:57.630 }' 00:17:57.630 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.888 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:57.888 pt2 00:17:57.888 pt3 00:17:57.888 pt4' 00:17:57.888 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:57.888 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:57.888 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:57.888 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:57.888 "name": "pt1", 00:17:57.888 "aliases": [ 00:17:57.888 "00000000-0000-0000-0000-000000000001" 00:17:57.888 ], 00:17:57.888 "product_name": "passthru", 00:17:57.888 "block_size": 512, 00:17:57.888 "num_blocks": 65536, 00:17:57.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.888 "assigned_rate_limits": { 00:17:57.888 "rw_ios_per_sec": 0, 00:17:57.888 "rw_mbytes_per_sec": 0, 00:17:57.888 "r_mbytes_per_sec": 0, 00:17:57.888 "w_mbytes_per_sec": 0 00:17:57.888 }, 00:17:57.888 "claimed": true, 00:17:57.888 "claim_type": "exclusive_write", 00:17:57.888 "zoned": false, 00:17:57.888 "supported_io_types": { 00:17:57.888 "read": true, 00:17:57.888 "write": true, 
00:17:57.888 "unmap": true, 00:17:57.888 "flush": true, 00:17:57.888 "reset": true, 00:17:57.888 "nvme_admin": false, 00:17:57.888 "nvme_io": false, 00:17:57.888 "nvme_io_md": false, 00:17:57.888 "write_zeroes": true, 00:17:57.888 "zcopy": true, 00:17:57.888 "get_zone_info": false, 00:17:57.888 "zone_management": false, 00:17:57.888 "zone_append": false, 00:17:57.888 "compare": false, 00:17:57.888 "compare_and_write": false, 00:17:57.888 "abort": true, 00:17:57.888 "seek_hole": false, 00:17:57.888 "seek_data": false, 00:17:57.888 "copy": true, 00:17:57.888 "nvme_iov_md": false 00:17:57.888 }, 00:17:57.888 "memory_domains": [ 00:17:57.888 { 00:17:57.888 "dma_device_id": "system", 00:17:57.888 "dma_device_type": 1 00:17:57.888 }, 00:17:57.888 { 00:17:57.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.888 "dma_device_type": 2 00:17:57.888 } 00:17:57.888 ], 00:17:57.888 "driver_specific": { 00:17:57.888 "passthru": { 00:17:57.888 "name": "pt1", 00:17:57.888 "base_bdev_name": "malloc1" 00:17:57.888 } 00:17:57.888 } 00:17:57.888 }' 00:17:57.888 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.888 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.146 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.405 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:58.405 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.405 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:58.405 10:25:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:58.405 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:58.405 "name": "pt2", 00:17:58.405 "aliases": [ 00:17:58.405 "00000000-0000-0000-0000-000000000002" 00:17:58.405 ], 00:17:58.405 "product_name": "passthru", 00:17:58.405 "block_size": 512, 00:17:58.405 "num_blocks": 65536, 00:17:58.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.405 "assigned_rate_limits": { 00:17:58.405 "rw_ios_per_sec": 0, 00:17:58.405 "rw_mbytes_per_sec": 0, 00:17:58.405 "r_mbytes_per_sec": 0, 00:17:58.405 "w_mbytes_per_sec": 0 00:17:58.405 }, 00:17:58.405 "claimed": true, 00:17:58.405 "claim_type": "exclusive_write", 00:17:58.405 "zoned": false, 00:17:58.405 "supported_io_types": { 00:17:58.405 "read": true, 00:17:58.405 "write": true, 00:17:58.405 "unmap": true, 00:17:58.405 "flush": true, 00:17:58.405 "reset": true, 00:17:58.405 "nvme_admin": false, 00:17:58.405 
"nvme_io": false, 00:17:58.405 "nvme_io_md": false, 00:17:58.405 "write_zeroes": true, 00:17:58.405 "zcopy": true, 00:17:58.405 "get_zone_info": false, 00:17:58.405 "zone_management": false, 00:17:58.405 "zone_append": false, 00:17:58.405 "compare": false, 00:17:58.405 "compare_and_write": false, 00:17:58.406 "abort": true, 00:17:58.406 "seek_hole": false, 00:17:58.406 "seek_data": false, 00:17:58.406 "copy": true, 00:17:58.406 "nvme_iov_md": false 00:17:58.406 }, 00:17:58.406 "memory_domains": [ 00:17:58.406 { 00:17:58.406 "dma_device_id": "system", 00:17:58.406 "dma_device_type": 1 00:17:58.406 }, 00:17:58.406 { 00:17:58.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.406 "dma_device_type": 2 00:17:58.406 } 00:17:58.406 ], 00:17:58.406 "driver_specific": { 00:17:58.406 "passthru": { 00:17:58.406 "name": "pt2", 00:17:58.406 "base_bdev_name": "malloc2" 00:17:58.406 } 00:17:58.406 } 00:17:58.406 }' 00:17:58.406 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.406 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.406 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:58.664 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:58.922 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:58.922 "name": "pt3", 00:17:58.922 "aliases": [ 00:17:58.922 "00000000-0000-0000-0000-000000000003" 00:17:58.922 ], 00:17:58.922 "product_name": "passthru", 00:17:58.922 "block_size": 512, 00:17:58.922 "num_blocks": 65536, 00:17:58.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.922 "assigned_rate_limits": { 00:17:58.922 "rw_ios_per_sec": 0, 00:17:58.922 "rw_mbytes_per_sec": 0, 00:17:58.922 "r_mbytes_per_sec": 0, 00:17:58.922 "w_mbytes_per_sec": 0 00:17:58.922 }, 00:17:58.922 "claimed": true, 00:17:58.922 "claim_type": "exclusive_write", 00:17:58.922 "zoned": false, 00:17:58.922 "supported_io_types": { 00:17:58.922 "read": true, 00:17:58.922 "write": true, 00:17:58.922 "unmap": true, 00:17:58.922 "flush": true, 00:17:58.922 "reset": true, 00:17:58.922 "nvme_admin": false, 00:17:58.922 "nvme_io": false, 00:17:58.922 "nvme_io_md": false, 00:17:58.922 "write_zeroes": true, 00:17:58.922 "zcopy": true, 00:17:58.922 
"get_zone_info": false, 00:17:58.922 "zone_management": false, 00:17:58.922 "zone_append": false, 00:17:58.922 "compare": false, 00:17:58.922 "compare_and_write": false, 00:17:58.922 "abort": true, 00:17:58.922 "seek_hole": false, 00:17:58.922 "seek_data": false, 00:17:58.922 "copy": true, 00:17:58.922 "nvme_iov_md": false 00:17:58.922 }, 00:17:58.922 "memory_domains": [ 00:17:58.922 { 00:17:58.922 "dma_device_id": "system", 00:17:58.922 "dma_device_type": 1 00:17:58.922 }, 00:17:58.922 { 00:17:58.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.922 "dma_device_type": 2 00:17:58.922 } 00:17:58.922 ], 00:17:58.922 "driver_specific": { 00:17:58.922 "passthru": { 00:17:58.922 "name": "pt3", 00:17:58.922 "base_bdev_name": "malloc3" 00:17:58.922 } 00:17:58.922 } 00:17:58.922 }' 00:17:58.922 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.922 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.922 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:58.922 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.180 10:25:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:17:59.438 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.438 "name": "pt4", 00:17:59.438 "aliases": [ 00:17:59.438 "00000000-0000-0000-0000-000000000004" 00:17:59.438 ], 00:17:59.438 "product_name": "passthru", 00:17:59.438 "block_size": 512, 00:17:59.438 "num_blocks": 65536, 00:17:59.438 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.438 "assigned_rate_limits": { 00:17:59.438 "rw_ios_per_sec": 0, 00:17:59.438 "rw_mbytes_per_sec": 0, 00:17:59.438 "r_mbytes_per_sec": 0, 00:17:59.438 "w_mbytes_per_sec": 0 00:17:59.438 }, 00:17:59.438 "claimed": true, 00:17:59.438 "claim_type": "exclusive_write", 00:17:59.438 "zoned": false, 00:17:59.438 "supported_io_types": { 00:17:59.438 "read": true, 00:17:59.438 "write": true, 00:17:59.438 "unmap": true, 00:17:59.438 "flush": true, 00:17:59.438 "reset": true, 00:17:59.438 "nvme_admin": false, 00:17:59.438 "nvme_io": false, 00:17:59.438 "nvme_io_md": false, 00:17:59.438 "write_zeroes": true, 00:17:59.438 "zcopy": true, 00:17:59.438 "get_zone_info": false, 00:17:59.438 "zone_management": false, 00:17:59.438 "zone_append": false, 00:17:59.438 "compare": false, 
00:17:59.438 "compare_and_write": false, 00:17:59.438 "abort": true, 00:17:59.438 "seek_hole": false, 00:17:59.438 "seek_data": false, 00:17:59.438 "copy": true, 00:17:59.438 "nvme_iov_md": false 00:17:59.438 }, 00:17:59.438 "memory_domains": [ 00:17:59.438 { 00:17:59.438 "dma_device_id": "system", 00:17:59.438 "dma_device_type": 1 00:17:59.438 }, 00:17:59.438 { 00:17:59.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.438 "dma_device_type": 2 00:17:59.438 } 00:17:59.438 ], 00:17:59.438 "driver_specific": { 00:17:59.438 "passthru": { 00:17:59.438 "name": "pt4", 00:17:59.438 "base_bdev_name": "malloc4" 00:17:59.438 } 00:17:59.438 } 00:17:59.438 }' 00:17:59.438 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.438 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.438 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.438 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.438 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.439 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.439 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.697 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.697 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.697 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.697 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.697 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.697 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:59.697 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:59.955 [2024-07-15 10:25:24.509444] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.955 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c33794a3-8735-488c-8385-003b118da85e 00:17:59.955 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c33794a3-8735-488c-8385-003b118da85e ']' 00:17:59.955 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:59.955 [2024-07-15 10:25:24.681681] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.955 [2024-07-15 10:25:24.681695] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.955 [2024-07-15 10:25:24.681727] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.955 [2024-07-15 10:25:24.681779] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.955 [2024-07-15 10:25:24.681787] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db3560 name raid_bdev1, state offline 00:17:59.955 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.955 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:00.213 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:00.213 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:00.213 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.213 10:25:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:00.471 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.471 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:00.471 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.471 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:00.728 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.728 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:00.985 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:01.243 [2024-07-15 10:25:25.868843] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:01.243 [2024-07-15 10:25:25.869799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:01.243 [2024-07-15 10:25:25.869828] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:01.243 [2024-07-15 10:25:25.869849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:01.243 [2024-07-15 10:25:25.869878] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:01.243 [2024-07-15 10:25:25.869913] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:01.243 [2024-07-15 10:25:25.869928] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:01.243 [2024-07-15 10:25:25.869941] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:01.243 [2024-07-15 10:25:25.869952] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.243 [2024-07-15 10:25:25.869958] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f5dd50 name raid_bdev1, state configuring 00:18:01.243 request: 00:18:01.243 { 00:18:01.243 "name": "raid_bdev1", 00:18:01.243 "raid_level": "raid1", 00:18:01.243 "base_bdevs": [ 00:18:01.243 "malloc1", 00:18:01.243 "malloc2", 00:18:01.243 "malloc3", 00:18:01.243 "malloc4" 00:18:01.243 ], 00:18:01.243 "superblock": false, 00:18:01.243 "method": "bdev_raid_create", 00:18:01.243 "req_id": 1 00:18:01.243 } 00:18:01.243 Got JSON-RPC error response 00:18:01.243 response: 00:18:01.243 { 00:18:01.243 "code": -17, 00:18:01.243 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:01.243 } 00:18:01.243 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:01.243 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:01.243 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:01.243 10:25:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:01.243 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.243 10:25:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.502 [2024-07-15 10:25:26.213706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.502 [2024-07-15 10:25:26.213735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.502 [2024-07-15 10:25:26.213749] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f5d3f0 00:18:01.502 [2024-07-15 10:25:26.213758] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.502 [2024-07-15 10:25:26.214884] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.502 [2024-07-15 10:25:26.214916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.502 [2024-07-15 10:25:26.214960] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:01.502 [2024-07-15 10:25:26.214978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.502 pt1 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.502 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.760 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.760 "name": "raid_bdev1", 00:18:01.760 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:01.760 "strip_size_kb": 0, 00:18:01.760 "state": "configuring", 00:18:01.760 "raid_level": "raid1", 00:18:01.760 "superblock": true, 00:18:01.760 "num_base_bdevs": 4, 00:18:01.760 "num_base_bdevs_discovered": 1, 00:18:01.760 "num_base_bdevs_operational": 4, 00:18:01.760 "base_bdevs_list": [ 00:18:01.760 { 00:18:01.760 "name": "pt1", 00:18:01.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.760 "is_configured": true, 00:18:01.760 "data_offset": 2048, 00:18:01.760 "data_size": 63488 00:18:01.760 }, 00:18:01.760 { 00:18:01.760 "name": null, 00:18:01.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.760 "is_configured": false, 00:18:01.760 "data_offset": 2048, 00:18:01.760 "data_size": 63488 00:18:01.760 }, 00:18:01.760 { 00:18:01.760 
"name": null, 00:18:01.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.760 "is_configured": false, 00:18:01.760 "data_offset": 2048, 00:18:01.760 "data_size": 63488 00:18:01.760 }, 00:18:01.760 { 00:18:01.760 "name": null, 00:18:01.760 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.760 "is_configured": false, 00:18:01.760 "data_offset": 2048, 00:18:01.760 "data_size": 63488 00:18:01.760 } 00:18:01.760 ] 00:18:01.760 }' 00:18:01.760 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.760 10:25:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.323 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:18:02.323 10:25:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.323 [2024-07-15 10:25:27.011769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.323 [2024-07-15 10:25:27.011803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.323 [2024-07-15 10:25:27.011818] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dba520 00:18:02.323 [2024-07-15 10:25:27.011831] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.323 [2024-07-15 10:25:27.012079] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.323 [2024-07-15 10:25:27.012093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.323 [2024-07-15 10:25:27.012136] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:02.323 [2024-07-15 10:25:27.012149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.323 pt2 00:18:02.323 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:02.630 [2024-07-15 10:25:27.184218] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.630 "name": "raid_bdev1", 00:18:02.630 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:02.630 "strip_size_kb": 0, 00:18:02.630 "state": "configuring", 00:18:02.630 "raid_level": "raid1", 00:18:02.630 "superblock": true, 00:18:02.630 "num_base_bdevs": 4, 00:18:02.630 "num_base_bdevs_discovered": 1, 00:18:02.630 "num_base_bdevs_operational": 4, 00:18:02.630 "base_bdevs_list": [ 00:18:02.630 { 00:18:02.630 "name": "pt1", 00:18:02.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.630 "is_configured": true, 00:18:02.630 "data_offset": 2048, 00:18:02.630 "data_size": 63488 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "name": null, 00:18:02.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.630 "is_configured": false, 00:18:02.630 "data_offset": 2048, 00:18:02.630 "data_size": 63488 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "name": null, 00:18:02.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.630 "is_configured": false, 00:18:02.630 "data_offset": 2048, 00:18:02.630 "data_size": 63488 00:18:02.630 }, 00:18:02.630 { 00:18:02.630 "name": null, 00:18:02.630 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.630 "is_configured": false, 00:18:02.630 "data_offset": 2048, 00:18:02.630 "data_size": 63488 00:18:02.630 } 00:18:02.630 ] 00:18:02.630 }' 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.630 10:25:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.193 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:03.193 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:03.193 10:25:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.450 [2024-07-15 10:25:27.990295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.450 [2024-07-15 10:25:27.990333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.450 [2024-07-15 10:25:27.990349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dba750 00:18:03.450 [2024-07-15 10:25:27.990357] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.450 [2024-07-15 10:25:27.990594] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.450 [2024-07-15 10:25:27.990607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.450 [2024-07-15 10:25:27.990649] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.450 [2024-07-15 10:25:27.990661] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.450 pt2 00:18:03.450 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:03.450 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:03.450 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:03.450 [2024-07-15 10:25:28.158726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:03.450 [2024-07-15 10:25:28.158756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.450 [2024-07-15 10:25:28.158768] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db3fa0 00:18:03.450 [2024-07-15 10:25:28.158776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.450 [2024-07-15 10:25:28.158990] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.450 [2024-07-15 10:25:28.159002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:03.450 [2024-07-15 10:25:28.159040] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:03.450 [2024-07-15 10:25:28.159063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:03.450 pt3 00:18:03.450 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:03.450 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:03.450 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:03.708 [2024-07-15 10:25:28.327178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:03.708 [2024-07-15 10:25:28.327205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.708 [2024-07-15 10:25:28.327218] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db4b40 00:18:03.708 [2024-07-15 10:25:28.327226] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.708 [2024-07-15 10:25:28.327436] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.708 [2024-07-15 10:25:28.327448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:03.708 [2024-07-15 10:25:28.327484] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:03.708 [2024-07-15 10:25:28.327497] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:03.708 [2024-07-15 10:25:28.327578] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1db1fc0 00:18:03.708 [2024-07-15 10:25:28.327587] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:03.708 [2024-07-15 10:25:28.327699] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1db18b0 00:18:03.708 [2024-07-15 10:25:28.327789] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1db1fc0 00:18:03.708 [2024-07-15 10:25:28.327796] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1db1fc0 00:18:03.708 [2024-07-15 10:25:28.327859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.708 pt4 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:03.708 
10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.708 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.965 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.965 "name": "raid_bdev1", 00:18:03.965 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:03.965 "strip_size_kb": 0, 00:18:03.965 "state": "online", 00:18:03.965 "raid_level": "raid1", 00:18:03.965 "superblock": true, 00:18:03.965 "num_base_bdevs": 4, 00:18:03.965 "num_base_bdevs_discovered": 4, 00:18:03.965 "num_base_bdevs_operational": 4, 00:18:03.965 "base_bdevs_list": [ 00:18:03.965 { 00:18:03.965 "name": "pt1", 00:18:03.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.965 "is_configured": true, 00:18:03.965 "data_offset": 2048, 00:18:03.965 "data_size": 63488 00:18:03.965 }, 00:18:03.965 { 00:18:03.965 "name": "pt2", 00:18:03.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.965 "is_configured": true, 00:18:03.965 "data_offset": 2048, 00:18:03.965 "data_size": 63488 00:18:03.965 }, 00:18:03.965 { 00:18:03.965 "name": "pt3", 00:18:03.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.965 "is_configured": true, 00:18:03.965 "data_offset": 2048, 00:18:03.965 "data_size": 63488 00:18:03.965 }, 00:18:03.965 { 00:18:03.965 "name": "pt4", 00:18:03.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.965 "is_configured": true, 00:18:03.965 "data_offset": 2048, 00:18:03.965 "data_size": 63488 00:18:03.965 } 00:18:03.965 ] 00:18:03.965 }' 00:18:03.965 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.965 10:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.223 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.223 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:04.223 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:04.223 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:04.223 10:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:18:04.223 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:04.223 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:04.223 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:04.480 [2024-07-15 10:25:29.149482] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.480 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:04.480 "name": "raid_bdev1", 00:18:04.480 "aliases": [ 00:18:04.480 "c33794a3-8735-488c-8385-003b118da85e" 00:18:04.480 ], 00:18:04.480 "product_name": "Raid Volume", 00:18:04.480 "block_size": 512, 00:18:04.480 "num_blocks": 63488, 00:18:04.480 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:04.480 "assigned_rate_limits": { 00:18:04.480 "rw_ios_per_sec": 0, 00:18:04.480 "rw_mbytes_per_sec": 0, 00:18:04.480 "r_mbytes_per_sec": 0, 00:18:04.480 "w_mbytes_per_sec": 0 00:18:04.480 }, 00:18:04.480 "claimed": false, 00:18:04.480 "zoned": false, 00:18:04.480 "supported_io_types": { 00:18:04.480 "read": true, 00:18:04.480 "write": true, 00:18:04.480 "unmap": false, 00:18:04.480 "flush": false, 00:18:04.480 "reset": true, 00:18:04.480 "nvme_admin": false, 00:18:04.480 "nvme_io": false, 00:18:04.480 "nvme_io_md": false, 00:18:04.480 "write_zeroes": true, 00:18:04.480 "zcopy": false, 00:18:04.480 "get_zone_info": false, 00:18:04.480 "zone_management": false, 00:18:04.480 "zone_append": false, 00:18:04.480 "compare": false, 00:18:04.480 "compare_and_write": false, 00:18:04.480 "abort": false, 00:18:04.480 "seek_hole": false, 00:18:04.480 "seek_data": false, 00:18:04.480 "copy": false, 00:18:04.480 "nvme_iov_md": false 00:18:04.480 }, 00:18:04.480 "memory_domains": [ 00:18:04.480 { 00:18:04.480 "dma_device_id": "system", 00:18:04.480 "dma_device_type": 1 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.480 "dma_device_type": 2 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "dma_device_id": "system", 00:18:04.480 "dma_device_type": 1 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.480 "dma_device_type": 2 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "dma_device_id": "system", 00:18:04.480 "dma_device_type": 1 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.480 "dma_device_type": 2 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "dma_device_id": "system", 00:18:04.480 "dma_device_type": 1 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.480 "dma_device_type": 2 00:18:04.480 } 00:18:04.480 ], 00:18:04.480 "driver_specific": { 00:18:04.480 "raid": { 00:18:04.480 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:04.480 "strip_size_kb": 0, 00:18:04.480 "state": "online", 00:18:04.480 "raid_level": "raid1", 00:18:04.480 "superblock": true, 00:18:04.480 "num_base_bdevs": 4, 00:18:04.480 "num_base_bdevs_discovered": 4, 00:18:04.480 "num_base_bdevs_operational": 4, 00:18:04.480 "base_bdevs_list": [ 00:18:04.480 { 00:18:04.480 "name": "pt1", 00:18:04.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.480 "is_configured": true, 00:18:04.480 "data_offset": 2048, 00:18:04.480 "data_size": 63488 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "name": "pt2", 00:18:04.480 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:04.480 "is_configured": true, 00:18:04.480 "data_offset": 2048, 00:18:04.480 "data_size": 63488 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "name": "pt3", 00:18:04.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.480 "is_configured": true, 00:18:04.480 "data_offset": 2048, 00:18:04.480 "data_size": 63488 00:18:04.480 }, 00:18:04.480 { 00:18:04.480 "name": "pt4", 00:18:04.480 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.480 "is_configured": true, 00:18:04.480 "data_offset": 2048, 00:18:04.480 "data_size": 63488 00:18:04.481 } 00:18:04.481 ] 00:18:04.481 } 00:18:04.481 } 00:18:04.481 }' 00:18:04.481 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:04.481 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:04.481 pt2 00:18:04.481 pt3 00:18:04.481 pt4' 00:18:04.481 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:04.481 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:04.481 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:04.738 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:04.738 "name": "pt1", 00:18:04.738 "aliases": [ 00:18:04.738 "00000000-0000-0000-0000-000000000001" 00:18:04.738 ], 00:18:04.738 "product_name": "passthru", 00:18:04.738 "block_size": 512, 00:18:04.738 "num_blocks": 65536, 00:18:04.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.738 "assigned_rate_limits": { 00:18:04.738 "rw_ios_per_sec": 0, 00:18:04.738 "rw_mbytes_per_sec": 0, 00:18:04.738 "r_mbytes_per_sec": 0, 00:18:04.738 "w_mbytes_per_sec": 0 00:18:04.738 }, 00:18:04.738 "claimed": true, 00:18:04.738 "claim_type": "exclusive_write", 00:18:04.738 "zoned": false, 00:18:04.738 "supported_io_types": { 00:18:04.738 "read": true, 00:18:04.738 "write": true, 00:18:04.738 "unmap": true, 00:18:04.738 "flush": true, 00:18:04.738 "reset": true, 00:18:04.738 "nvme_admin": false, 00:18:04.738 "nvme_io": false, 00:18:04.738 "nvme_io_md": false, 00:18:04.738 "write_zeroes": true, 00:18:04.738 "zcopy": true, 00:18:04.738 "get_zone_info": false, 00:18:04.738 "zone_management": false, 00:18:04.738 "zone_append": false, 00:18:04.738 "compare": false, 00:18:04.738 "compare_and_write": false, 00:18:04.738 "abort": true, 00:18:04.738 "seek_hole": false, 00:18:04.738 "seek_data": false, 00:18:04.738 "copy": true, 00:18:04.738 "nvme_iov_md": false 00:18:04.738 }, 00:18:04.738 "memory_domains": [ 00:18:04.738 { 00:18:04.738 "dma_device_id": "system", 00:18:04.738 "dma_device_type": 1 00:18:04.738 }, 00:18:04.738 { 00:18:04.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.738 "dma_device_type": 2 00:18:04.738 } 00:18:04.738 ], 00:18:04.738 "driver_specific": { 00:18:04.738 "passthru": { 00:18:04.738 "name": "pt1", 00:18:04.738 "base_bdev_name": "malloc1" 00:18:04.738 } 00:18:04.738 } 00:18:04.738 }' 00:18:04.738 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.738 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:04.738 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:04.738 10:25:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.738 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:04.995 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.253 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.253 "name": "pt2", 00:18:05.253 "aliases": [ 00:18:05.253 "00000000-0000-0000-0000-000000000002" 00:18:05.253 ], 00:18:05.253 "product_name": "passthru", 00:18:05.253 "block_size": 512, 00:18:05.253 "num_blocks": 65536, 00:18:05.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.253 "assigned_rate_limits": { 00:18:05.253 "rw_ios_per_sec": 0, 00:18:05.253 "rw_mbytes_per_sec": 0, 00:18:05.253 "r_mbytes_per_sec": 0, 00:18:05.253 "w_mbytes_per_sec": 0 00:18:05.253 }, 00:18:05.253 "claimed": true, 00:18:05.253 "claim_type": "exclusive_write", 00:18:05.253 "zoned": false, 00:18:05.253 "supported_io_types": { 00:18:05.253 "read": true, 00:18:05.253 "write": true, 00:18:05.253 "unmap": true, 00:18:05.253 "flush": true, 00:18:05.253 "reset": true, 00:18:05.253 "nvme_admin": false, 00:18:05.253 "nvme_io": false, 00:18:05.253 "nvme_io_md": false, 00:18:05.253 "write_zeroes": true, 00:18:05.253 "zcopy": true, 00:18:05.253 "get_zone_info": false, 00:18:05.253 "zone_management": false, 00:18:05.253 "zone_append": false, 00:18:05.253 "compare": false, 00:18:05.253 "compare_and_write": false, 00:18:05.253 "abort": true, 00:18:05.253 "seek_hole": false, 00:18:05.253 "seek_data": false, 00:18:05.253 "copy": true, 00:18:05.253 "nvme_iov_md": false 00:18:05.253 }, 00:18:05.253 "memory_domains": [ 00:18:05.253 { 00:18:05.253 "dma_device_id": "system", 00:18:05.253 "dma_device_type": 1 00:18:05.253 }, 00:18:05.253 { 00:18:05.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.253 "dma_device_type": 2 00:18:05.253 } 00:18:05.253 ], 00:18:05.253 "driver_specific": { 00:18:05.253 "passthru": { 00:18:05.253 "name": "pt2", 00:18:05.253 "base_bdev_name": "malloc2" 00:18:05.253 } 00:18:05.253 } 00:18:05.253 }' 00:18:05.253 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.253 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.253 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:05.253 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.253 10:25:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.253 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:05.254 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.254 10:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.254 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.254 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.511 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.511 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:05.511 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.511 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:05.511 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.511 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.511 "name": "pt3", 00:18:05.511 "aliases": [ 00:18:05.511 "00000000-0000-0000-0000-000000000003" 00:18:05.511 ], 00:18:05.511 "product_name": "passthru", 00:18:05.511 "block_size": 512, 00:18:05.511 "num_blocks": 65536, 00:18:05.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.511 "assigned_rate_limits": { 00:18:05.511 "rw_ios_per_sec": 0, 00:18:05.511 "rw_mbytes_per_sec": 0, 00:18:05.511 "r_mbytes_per_sec": 0, 00:18:05.511 "w_mbytes_per_sec": 0 00:18:05.511 }, 00:18:05.511 "claimed": true, 00:18:05.511 "claim_type": "exclusive_write", 00:18:05.511 "zoned": false, 00:18:05.511 "supported_io_types": { 00:18:05.511 "read": true, 00:18:05.511 "write": true, 00:18:05.511 "unmap": true, 00:18:05.511 "flush": true, 00:18:05.511 "reset": true, 00:18:05.511 "nvme_admin": false, 00:18:05.511 "nvme_io": false, 00:18:05.511 "nvme_io_md": false, 00:18:05.511 "write_zeroes": true, 00:18:05.511 "zcopy": true, 00:18:05.511 "get_zone_info": false, 00:18:05.511 "zone_management": false, 00:18:05.511 "zone_append": false, 00:18:05.511 "compare": false, 00:18:05.511 "compare_and_write": false, 00:18:05.511 "abort": true, 00:18:05.511 "seek_hole": false, 00:18:05.511 "seek_data": false, 00:18:05.511 "copy": true, 00:18:05.511 "nvme_iov_md": false 00:18:05.511 }, 00:18:05.511 "memory_domains": [ 00:18:05.511 { 00:18:05.511 "dma_device_id": "system", 00:18:05.511 "dma_device_type": 1 00:18:05.511 }, 00:18:05.511 { 00:18:05.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.511 "dma_device_type": 2 00:18:05.511 } 00:18:05.511 ], 00:18:05.511 "driver_specific": { 00:18:05.511 "passthru": { 00:18:05.511 "name": "pt3", 00:18:05.511 "base_bdev_name": "malloc3" 00:18:05.511 } 00:18:05.511 } 00:18:05.511 }' 00:18:05.511 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.769 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.027 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.027 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.027 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:06.027 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.027 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.027 "name": "pt4", 00:18:06.027 "aliases": [ 00:18:06.027 "00000000-0000-0000-0000-000000000004" 00:18:06.027 ], 00:18:06.027 "product_name": "passthru", 00:18:06.027 "block_size": 512, 00:18:06.027 "num_blocks": 65536, 00:18:06.027 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:06.027 "assigned_rate_limits": { 00:18:06.027 "rw_ios_per_sec": 0, 00:18:06.027 "rw_mbytes_per_sec": 0, 00:18:06.027 "r_mbytes_per_sec": 0, 00:18:06.027 "w_mbytes_per_sec": 0 00:18:06.027 }, 00:18:06.027 "claimed": true, 00:18:06.027 "claim_type": "exclusive_write", 00:18:06.027 "zoned": false, 00:18:06.027 "supported_io_types": { 00:18:06.027 "read": true, 00:18:06.027 "write": true, 00:18:06.027 "unmap": true, 00:18:06.027 "flush": true, 00:18:06.027 "reset": true, 00:18:06.027 "nvme_admin": false, 00:18:06.027 "nvme_io": false, 00:18:06.028 "nvme_io_md": false, 00:18:06.028 "write_zeroes": true, 00:18:06.028 "zcopy": true, 00:18:06.028 "get_zone_info": false, 00:18:06.028 "zone_management": false, 00:18:06.028 "zone_append": false, 00:18:06.028 "compare": false, 00:18:06.028 "compare_and_write": false, 00:18:06.028 "abort": true, 00:18:06.028 "seek_hole": false, 00:18:06.028 "seek_data": false, 00:18:06.028 "copy": true, 00:18:06.028 "nvme_iov_md": false 00:18:06.028 }, 00:18:06.028 "memory_domains": [ 00:18:06.028 { 00:18:06.028 "dma_device_id": "system", 00:18:06.028 "dma_device_type": 1 00:18:06.028 }, 00:18:06.028 { 00:18:06.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.028 "dma_device_type": 2 00:18:06.028 } 00:18:06.028 ], 00:18:06.028 "driver_specific": { 00:18:06.028 "passthru": { 00:18:06.028 "name": "pt4", 00:18:06.028 "base_bdev_name": "malloc4" 00:18:06.028 } 00:18:06.028 } 00:18:06.028 }' 00:18:06.028 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.028 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.285 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.285 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.285 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.285 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.285 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.285 10:25:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.285 10:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.285 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.285 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.541 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.541 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:06.541 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:06.541 [2024-07-15 10:25:31.234850] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.541 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c33794a3-8735-488c-8385-003b118da85e '!=' c33794a3-8735-488c-8385-003b118da85e ']' 00:18:06.541 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:06.541 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:06.541 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:06.541 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:06.799 [2024-07-15 10:25:31.411137] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.799 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.057 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.057 "name": "raid_bdev1", 00:18:07.057 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:07.057 "strip_size_kb": 0, 00:18:07.057 "state": "online", 00:18:07.057 "raid_level": "raid1", 00:18:07.057 "superblock": true, 00:18:07.057 "num_base_bdevs": 4, 00:18:07.057 "num_base_bdevs_discovered": 3, 00:18:07.057 "num_base_bdevs_operational": 3, 00:18:07.057 
"base_bdevs_list": [ 00:18:07.057 { 00:18:07.057 "name": null, 00:18:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.057 "is_configured": false, 00:18:07.057 "data_offset": 2048, 00:18:07.057 "data_size": 63488 00:18:07.057 }, 00:18:07.057 { 00:18:07.057 "name": "pt2", 00:18:07.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:07.057 "is_configured": true, 00:18:07.057 "data_offset": 2048, 00:18:07.057 "data_size": 63488 00:18:07.057 }, 00:18:07.057 { 00:18:07.057 "name": "pt3", 00:18:07.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:07.057 "is_configured": true, 00:18:07.057 "data_offset": 2048, 00:18:07.057 "data_size": 63488 00:18:07.057 }, 00:18:07.057 { 00:18:07.057 "name": "pt4", 00:18:07.057 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:07.057 "is_configured": true, 00:18:07.057 "data_offset": 2048, 00:18:07.057 "data_size": 63488 00:18:07.057 } 00:18:07.057 ] 00:18:07.057 }' 00:18:07.057 10:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.057 10:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.315 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:07.573 [2024-07-15 10:25:32.217194] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.573 [2024-07-15 10:25:32.217213] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.573 [2024-07-15 10:25:32.217249] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.573 [2024-07-15 10:25:32.217295] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.573 [2024-07-15 10:25:32.217303] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db1fc0 name raid_bdev1, state offline 00:18:07.573 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:07.573 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.831 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:07.831 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:07.831 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:07.831 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:07.831 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:07.831 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:07.831 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:07.831 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:08.089 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:08.089 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:08.089 10:25:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:08.347 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:08.347 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:08.347 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:08.347 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:08.347 10:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.347 [2024-07-15 10:25:33.035280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.347 [2024-07-15 10:25:33.035316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.347 [2024-07-15 10:25:33.035329] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dba750 00:18:08.347 [2024-07-15 10:25:33.035337] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.347 [2024-07-15 10:25:33.036471] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.347 [2024-07-15 10:25:33.036493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.347 [2024-07-15 10:25:33.036539] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:08.347 [2024-07-15 10:25:33.036557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.347 pt2 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.347 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.606 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.606 "name": "raid_bdev1", 00:18:08.606 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:08.606 "strip_size_kb": 0, 00:18:08.606 "state": "configuring", 00:18:08.606 "raid_level": "raid1", 00:18:08.606 "superblock": true, 
00:18:08.606 "num_base_bdevs": 4, 00:18:08.606 "num_base_bdevs_discovered": 1, 00:18:08.606 "num_base_bdevs_operational": 3, 00:18:08.606 "base_bdevs_list": [ 00:18:08.606 { 00:18:08.606 "name": null, 00:18:08.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.606 "is_configured": false, 00:18:08.606 "data_offset": 2048, 00:18:08.606 "data_size": 63488 00:18:08.606 }, 00:18:08.606 { 00:18:08.606 "name": "pt2", 00:18:08.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.606 "is_configured": true, 00:18:08.606 "data_offset": 2048, 00:18:08.606 "data_size": 63488 00:18:08.606 }, 00:18:08.606 { 00:18:08.606 "name": null, 00:18:08.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:08.606 "is_configured": false, 00:18:08.606 "data_offset": 2048, 00:18:08.606 "data_size": 63488 00:18:08.606 }, 00:18:08.606 { 00:18:08.606 "name": null, 00:18:08.606 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:08.606 "is_configured": false, 00:18:08.606 "data_offset": 2048, 00:18:08.606 "data_size": 63488 00:18:08.606 } 00:18:08.606 ] 00:18:08.606 }' 00:18:08.606 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.606 10:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.865 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:08.865 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:08.865 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:09.124 [2024-07-15 10:25:33.785221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:09.124 [2024-07-15 10:25:33.785261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.124 [2024-07-15 10:25:33.785275] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db1af0 00:18:09.124 [2024-07-15 10:25:33.785283] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.124 [2024-07-15 10:25:33.785530] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.124 [2024-07-15 10:25:33.785542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:09.124 [2024-07-15 10:25:33.785587] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:09.124 [2024-07-15 10:25:33.785600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:09.124 pt3 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.124 10:25:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.124 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.382 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.382 "name": "raid_bdev1", 00:18:09.382 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:09.382 "strip_size_kb": 0, 00:18:09.382 "state": "configuring", 00:18:09.382 "raid_level": "raid1", 00:18:09.382 "superblock": true, 00:18:09.382 "num_base_bdevs": 4, 00:18:09.382 "num_base_bdevs_discovered": 2, 00:18:09.382 "num_base_bdevs_operational": 3, 00:18:09.382 "base_bdevs_list": [ 00:18:09.382 { 00:18:09.382 "name": null, 00:18:09.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.382 "is_configured": false, 00:18:09.382 "data_offset": 2048, 00:18:09.382 "data_size": 63488 00:18:09.382 }, 00:18:09.382 { 00:18:09.382 "name": "pt2", 00:18:09.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.382 "is_configured": true, 00:18:09.382 "data_offset": 2048, 00:18:09.382 "data_size": 63488 00:18:09.382 }, 00:18:09.382 { 00:18:09.382 "name": "pt3", 00:18:09.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.382 "is_configured": true, 00:18:09.382 "data_offset": 2048, 00:18:09.382 "data_size": 63488 00:18:09.382 }, 00:18:09.382 { 00:18:09.382 "name": null, 00:18:09.382 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:09.382 "is_configured": false, 00:18:09.382 "data_offset": 2048, 00:18:09.382 "data_size": 63488 00:18:09.382 } 00:18:09.382 ] 00:18:09.382 }' 00:18:09.382 10:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.382 10:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.704 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:18:09.704 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:09.704 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:18:09.704 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:09.962 [2024-07-15 10:25:34.599325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:09.962 [2024-07-15 10:25:34.599363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.962 [2024-07-15 10:25:34.599377] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db0c20 00:18:09.962 [2024-07-15 10:25:34.599385] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.962 [2024-07-15 10:25:34.599610] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.962 [2024-07-15 10:25:34.599623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:09.962 [2024-07-15 10:25:34.599668] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:09.962 [2024-07-15 10:25:34.599681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:09.962 [2024-07-15 10:25:34.599753] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1db1280 00:18:09.962 [2024-07-15 10:25:34.599760] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:09.962 [2024-07-15 10:25:34.599868] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1db6580 00:18:09.962 [2024-07-15 10:25:34.599962] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1db1280 00:18:09.962 [2024-07-15 10:25:34.599970] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1db1280 00:18:09.962 [2024-07-15 10:25:34.600038] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.962 pt4 00:18:09.962 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:09.962 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.963 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.221 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.221 "name": "raid_bdev1", 00:18:10.221 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:10.221 "strip_size_kb": 0, 00:18:10.221 "state": "online", 00:18:10.221 "raid_level": "raid1", 00:18:10.221 "superblock": true, 00:18:10.221 "num_base_bdevs": 4, 00:18:10.221 "num_base_bdevs_discovered": 3, 00:18:10.221 "num_base_bdevs_operational": 3, 00:18:10.221 "base_bdevs_list": [ 00:18:10.221 { 00:18:10.221 "name": null, 00:18:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.221 "is_configured": false, 00:18:10.221 "data_offset": 2048, 00:18:10.221 "data_size": 63488 00:18:10.221 }, 00:18:10.221 { 00:18:10.221 "name": "pt2", 00:18:10.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.221 "is_configured": true, 00:18:10.221 "data_offset": 2048, 00:18:10.221 "data_size": 63488 00:18:10.221 }, 00:18:10.221 { 00:18:10.221 "name": "pt3", 00:18:10.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.221 "is_configured": true, 00:18:10.221 "data_offset": 2048, 00:18:10.221 "data_size": 63488 00:18:10.221 
}, 00:18:10.221 { 00:18:10.221 "name": "pt4", 00:18:10.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:10.221 "is_configured": true, 00:18:10.221 "data_offset": 2048, 00:18:10.221 "data_size": 63488 00:18:10.221 } 00:18:10.221 ] 00:18:10.221 }' 00:18:10.221 10:25:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.221 10:25:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.786 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:10.786 [2024-07-15 10:25:35.429451] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.786 [2024-07-15 10:25:35.429469] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:10.786 [2024-07-15 10:25:35.429504] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.786 [2024-07-15 10:25:35.429551] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.786 [2024-07-15 10:25:35.429558] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db1280 name raid_bdev1, state offline 00:18:10.786 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.786 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:11.044 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:11.044 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:11.044 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:18:11.044 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:18:11.044 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:11.044 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:11.302 [2024-07-15 10:25:35.942753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:11.302 [2024-07-15 10:25:35.942785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.302 [2024-07-15 10:25:35.942797] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db6400 00:18:11.302 [2024-07-15 10:25:35.942805] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.302 [2024-07-15 10:25:35.943926] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.302 [2024-07-15 10:25:35.943947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:11.302 [2024-07-15 10:25:35.943988] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:11.302 [2024-07-15 10:25:35.944004] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:11.302 [2024-07-15 10:25:35.944070] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than 
existing raid bdev raid_bdev1 (2) 00:18:11.302 [2024-07-15 10:25:35.944078] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:11.302 [2024-07-15 10:25:35.944087] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f539f0 name raid_bdev1, state configuring 00:18:11.302 [2024-07-15 10:25:35.944102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.302 [2024-07-15 10:25:35.944148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:11.302 pt1 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.302 10:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.560 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.560 "name": "raid_bdev1", 00:18:11.560 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:11.560 "strip_size_kb": 0, 00:18:11.560 "state": "configuring", 00:18:11.560 "raid_level": "raid1", 00:18:11.560 "superblock": true, 00:18:11.560 "num_base_bdevs": 4, 00:18:11.560 "num_base_bdevs_discovered": 2, 00:18:11.560 "num_base_bdevs_operational": 3, 00:18:11.560 "base_bdevs_list": [ 00:18:11.560 { 00:18:11.560 "name": null, 00:18:11.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.560 "is_configured": false, 00:18:11.560 "data_offset": 2048, 00:18:11.560 "data_size": 63488 00:18:11.560 }, 00:18:11.560 { 00:18:11.560 "name": "pt2", 00:18:11.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.560 "is_configured": true, 00:18:11.560 "data_offset": 2048, 00:18:11.560 "data_size": 63488 00:18:11.560 }, 00:18:11.560 { 00:18:11.560 "name": "pt3", 00:18:11.560 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.560 "is_configured": true, 00:18:11.560 "data_offset": 2048, 00:18:11.560 "data_size": 63488 00:18:11.560 }, 00:18:11.560 { 00:18:11.560 "name": null, 00:18:11.560 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:11.560 "is_configured": false, 00:18:11.560 "data_offset": 2048, 00:18:11.560 "data_size": 63488 00:18:11.560 } 00:18:11.560 ] 00:18:11.560 }' 00:18:11.560 10:25:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.560 10:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.818 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:18:12.074 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:12.074 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:18:12.074 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:12.331 [2024-07-15 10:25:36.937314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:12.331 [2024-07-15 10:25:36.937353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.331 [2024-07-15 10:25:36.937369] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1db3050 00:18:12.331 [2024-07-15 10:25:36.937378] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.331 [2024-07-15 10:25:36.937641] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.331 [2024-07-15 10:25:36.937654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:12.331 [2024-07-15 10:25:36.937698] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:12.331 [2024-07-15 10:25:36.937711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:12.331 [2024-07-15 10:25:36.937789] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1db94c0 00:18:12.331 [2024-07-15 10:25:36.937796] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:12.331 [2024-07-15 10:25:36.937923] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1db25a0 00:18:12.331 [2024-07-15 10:25:36.938018] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1db94c0 00:18:12.331 [2024-07-15 10:25:36.938025] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1db94c0 00:18:12.331 [2024-07-15 10:25:36.938092] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.331 pt4 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.331 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.332 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.332 10:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.589 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.589 "name": "raid_bdev1", 00:18:12.589 "uuid": "c33794a3-8735-488c-8385-003b118da85e", 00:18:12.589 "strip_size_kb": 0, 00:18:12.589 "state": "online", 00:18:12.589 "raid_level": "raid1", 00:18:12.589 "superblock": true, 00:18:12.589 "num_base_bdevs": 4, 00:18:12.589 "num_base_bdevs_discovered": 3, 00:18:12.589 "num_base_bdevs_operational": 3, 00:18:12.589 "base_bdevs_list": [ 00:18:12.589 { 00:18:12.589 "name": null, 00:18:12.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.589 "is_configured": false, 00:18:12.589 "data_offset": 2048, 00:18:12.589 "data_size": 63488 00:18:12.589 }, 00:18:12.589 { 00:18:12.589 "name": "pt2", 00:18:12.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.589 "is_configured": true, 00:18:12.589 "data_offset": 2048, 00:18:12.589 "data_size": 63488 00:18:12.589 }, 00:18:12.589 { 00:18:12.589 "name": "pt3", 00:18:12.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.589 "is_configured": true, 00:18:12.589 "data_offset": 2048, 00:18:12.589 "data_size": 63488 00:18:12.589 }, 00:18:12.589 { 00:18:12.589 "name": "pt4", 00:18:12.589 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:12.589 "is_configured": true, 00:18:12.589 "data_offset": 2048, 00:18:12.589 "data_size": 63488 00:18:12.589 } 00:18:12.589 ] 00:18:12.589 }' 00:18:12.589 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.589 10:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.847 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:12.847 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:13.104 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:13.104 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:13.104 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:13.363 [2024-07-15 10:25:37.960137] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.363 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' c33794a3-8735-488c-8385-003b118da85e '!=' c33794a3-8735-488c-8385-003b118da85e ']' 00:18:13.363 10:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 1842531 00:18:13.363 10:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 1842531 ']' 00:18:13.363 10:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 1842531 00:18:13.363 10:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:13.363 10:25:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.363 10:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1842531 00:18:13.363 10:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.363 10:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.363 10:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1842531' 00:18:13.363 killing process with pid 1842531 00:18:13.363 10:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 1842531 00:18:13.363 [2024-07-15 10:25:38.035801] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.363 [2024-07-15 10:25:38.035843] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.363 [2024-07-15 10:25:38.035889] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.363 [2024-07-15 10:25:38.035898] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1db94c0 name raid_bdev1, state offline 00:18:13.363 10:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 1842531 00:18:13.363 [2024-07-15 10:25:38.065743] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:13.623 10:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:13.623 00:18:13.623 real 0m18.895s 00:18:13.623 user 0m34.281s 00:18:13.623 sys 0m3.749s 00:18:13.623 10:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:13.623 10:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.623 ************************************ 00:18:13.623 END TEST raid_superblock_test 00:18:13.623 ************************************ 00:18:13.623 10:25:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:13.623 10:25:38 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:18:13.623 10:25:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:13.623 10:25:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:13.623 10:25:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.623 ************************************ 00:18:13.623 START TEST raid_read_error_test 00:18:13.623 ************************************ 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:13.623 
10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.jYh5Mzo2Im 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1846243 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1846243 /var/tmp/spdk-raid.sock 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 1846243 ']' 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.623 10:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:13.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:13.624 10:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.624 10:25:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.624 [2024-07-15 10:25:38.365446] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
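(For reference, the bdevperf launch whose EAL/driver output follows reduces to roughly the sequence below. This is a condensed sketch assembled only from the commands already visible in this trace; the temporary log name and pid are specific to this run, and the backgrounding/pid capture is an assumption about how bdev_raid.sh wires it up rather than something shown in the trace.)

# per-run log file for bdevperf results (here: /raidtest/tmp.jYh5Mzo2Im)
bdevperf_log=$(mktemp -p /raidtest)
# start bdevperf on its own RPC socket: 60 s randrw at a 50% mix, 128k I/O, queue depth 1;
# -z makes bdevperf wait for a perform_tests RPC, -L bdev_raid enables extra raid debug logging
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
raid_pid=$!                                   # 1846243 in this run (assumed $! capture)
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock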
00:18:13.624 [2024-07-15 10:25:38.365492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846243 ] 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:13.883 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:13.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:13.883 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:13.883 [2024-07-15 10:25:38.456436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.883 [2024-07-15 10:25:38.528992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.883 [2024-07-15 10:25:38.578018] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.883 [2024-07-15 10:25:38.578042] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:14.451 10:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.451 10:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:14.451 10:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:14.451 10:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:14.720 BaseBdev1_malloc 00:18:14.720 10:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:14.720 true 00:18:14.720 10:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:14.978 [2024-07-15 10:25:39.621925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:14.978 [2024-07-15 10:25:39.621966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.978 [2024-07-15 10:25:39.621981] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d19190 00:18:14.978 [2024-07-15 10:25:39.621989] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.978 [2024-07-15 10:25:39.623136] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.978 [2024-07-15 10:25:39.623160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:14.978 BaseBdev1 00:18:14.978 10:25:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:14.978 10:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:15.236 BaseBdev2_malloc 00:18:15.236 10:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:15.236 true 00:18:15.236 10:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:15.496 [2024-07-15 10:25:40.114684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:15.496 [2024-07-15 10:25:40.114723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.496 [2024-07-15 10:25:40.114739] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1de20 00:18:15.496 [2024-07-15 10:25:40.114747] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.496 [2024-07-15 10:25:40.115849] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.496 [2024-07-15 10:25:40.115872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:15.496 BaseBdev2 00:18:15.496 10:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:15.496 10:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:15.754 BaseBdev3_malloc 00:18:15.754 10:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:15.754 true 00:18:15.754 10:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:16.013 [2024-07-15 10:25:40.636025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:16.013 [2024-07-15 10:25:40.636057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.013 [2024-07-15 10:25:40.636074] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d1ed90 00:18:16.013 [2024-07-15 10:25:40.636082] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.013 [2024-07-15 10:25:40.637116] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.013 [2024-07-15 10:25:40.637139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:16.013 BaseBdev3 00:18:16.013 10:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:16.013 10:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:16.270 BaseBdev4_malloc 00:18:16.270 10:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:16.270 true 00:18:16.270 10:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:16.529 [2024-07-15 10:25:41.149037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:16.529 [2024-07-15 10:25:41.149070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.529 [2024-07-15 10:25:41.149085] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d21000 00:18:16.529 [2024-07-15 10:25:41.149094] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.529 [2024-07-15 10:25:41.150164] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.529 [2024-07-15 10:25:41.150184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:16.529 BaseBdev4 00:18:16.529 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:16.529 [2024-07-15 10:25:41.305464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.529 [2024-07-15 10:25:41.306339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.529 [2024-07-15 10:25:41.306385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.529 [2024-07-15 10:25:41.306420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:16.529 [2024-07-15 10:25:41.306569] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d21dd0 00:18:16.529 [2024-07-15 10:25:41.306576] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:16.529 [2024-07-15 10:25:41.306703] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d23080 00:18:16.529 [2024-07-15 10:25:41.306803] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d21dd0 00:18:16.529 [2024-07-15 10:25:41.306809] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d21dd0 00:18:16.529 [2024-07-15 10:25:41.306876] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
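(The per-device setup traced around here, repeated for BaseBdev1 through BaseBdev4, plus the array creation, boils down to the RPC sequence below. It is a condensed sketch of what the trace shows: rpc and sock are shorthand for the full scripts/rpc.py path and RPC socket used above, and the index loop is illustrative, since the actual bdev_raid.sh iterates over its base_bdevs array. The verify_raid_bdev_state bookkeeping continues below.)

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3 4; do
    # 32 MB malloc bdev with 512-byte blocks
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
    # wrap it in an error-injection bdev (exposed as EE_BaseBdev${i}_malloc)
    $rpc -s $sock bdev_error_create BaseBdev${i}_malloc
    # expose the error bdev through a passthru bdev named BaseBdev$i
    $rpc -s $sock bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev$i
done
# assemble the four passthru bdevs into a raid1 array with an on-disk superblock (-s)
$rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
# the state check then reads the array back and filters the entry of interest with jq
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'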
00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.788 "name": "raid_bdev1", 00:18:16.788 "uuid": "3a1060b9-8fb0-458d-a753-58d89da8ff0f", 00:18:16.788 "strip_size_kb": 0, 00:18:16.788 "state": "online", 00:18:16.788 "raid_level": "raid1", 00:18:16.788 "superblock": true, 00:18:16.788 "num_base_bdevs": 4, 00:18:16.788 "num_base_bdevs_discovered": 4, 00:18:16.788 "num_base_bdevs_operational": 4, 00:18:16.788 "base_bdevs_list": [ 00:18:16.788 { 00:18:16.788 "name": "BaseBdev1", 00:18:16.788 "uuid": "c25323ce-fc96-5d1e-bcbe-4b0d54e5da14", 00:18:16.788 "is_configured": true, 00:18:16.788 "data_offset": 2048, 00:18:16.788 "data_size": 63488 00:18:16.788 }, 00:18:16.788 { 00:18:16.788 "name": "BaseBdev2", 00:18:16.788 "uuid": "510b684c-c586-5972-9729-6339bd76e461", 00:18:16.788 "is_configured": true, 00:18:16.788 "data_offset": 2048, 00:18:16.788 "data_size": 63488 00:18:16.788 }, 00:18:16.788 { 00:18:16.788 "name": "BaseBdev3", 00:18:16.788 "uuid": "cbfa02cc-67b1-5189-891b-46fc652fe691", 00:18:16.788 "is_configured": true, 00:18:16.788 "data_offset": 2048, 00:18:16.788 "data_size": 63488 00:18:16.788 }, 00:18:16.788 { 00:18:16.788 "name": "BaseBdev4", 00:18:16.788 "uuid": "d6e799a4-9686-58b1-bad9-8a71cc769207", 00:18:16.788 "is_configured": true, 00:18:16.788 "data_offset": 2048, 00:18:16.788 "data_size": 63488 00:18:16.788 } 00:18:16.788 ] 00:18:16.788 }' 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.788 10:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:17.354 10:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:17.354 [2024-07-15 10:25:42.051584] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d23080 00:18:18.322 10:25:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.588 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.588 "name": "raid_bdev1", 00:18:18.588 "uuid": "3a1060b9-8fb0-458d-a753-58d89da8ff0f", 00:18:18.588 "strip_size_kb": 0, 00:18:18.588 "state": "online", 00:18:18.588 "raid_level": "raid1", 00:18:18.588 "superblock": true, 00:18:18.588 "num_base_bdevs": 4, 00:18:18.588 "num_base_bdevs_discovered": 4, 00:18:18.589 "num_base_bdevs_operational": 4, 00:18:18.589 "base_bdevs_list": [ 00:18:18.589 { 00:18:18.589 "name": "BaseBdev1", 00:18:18.589 "uuid": "c25323ce-fc96-5d1e-bcbe-4b0d54e5da14", 00:18:18.589 "is_configured": true, 00:18:18.589 "data_offset": 2048, 00:18:18.589 "data_size": 63488 00:18:18.589 }, 00:18:18.589 { 00:18:18.589 "name": "BaseBdev2", 00:18:18.589 "uuid": "510b684c-c586-5972-9729-6339bd76e461", 00:18:18.589 "is_configured": true, 00:18:18.589 "data_offset": 2048, 00:18:18.589 "data_size": 63488 00:18:18.589 }, 00:18:18.589 { 00:18:18.589 "name": "BaseBdev3", 00:18:18.589 "uuid": "cbfa02cc-67b1-5189-891b-46fc652fe691", 00:18:18.589 "is_configured": true, 00:18:18.589 "data_offset": 2048, 00:18:18.589 "data_size": 63488 00:18:18.589 }, 00:18:18.589 { 00:18:18.589 "name": "BaseBdev4", 00:18:18.589 "uuid": "d6e799a4-9686-58b1-bad9-8a71cc769207", 00:18:18.589 "is_configured": true, 00:18:18.589 "data_offset": 2048, 00:18:18.589 "data_size": 63488 00:18:18.589 } 00:18:18.589 ] 00:18:18.589 }' 00:18:18.589 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.589 10:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.160 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:19.160 [2024-07-15 10:25:43.938234] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:19.160 [2024-07-15 10:25:43.938266] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.160 [2024-07-15 10:25:43.940294] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.160 [2024-07-15 10:25:43.940322] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.160 [2024-07-15 10:25:43.940394] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:18:19.160 [2024-07-15 10:25:43.940401] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d21dd0 name raid_bdev1, state offline 00:18:19.160 0 00:18:19.420 10:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1846243 00:18:19.420 10:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 1846243 ']' 00:18:19.420 10:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 1846243 00:18:19.420 10:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:19.420 10:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.420 10:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1846243 00:18:19.420 10:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:19.420 10:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:19.420 10:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1846243' 00:18:19.420 killing process with pid 1846243 00:18:19.420 10:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 1846243 00:18:19.420 [2024-07-15 10:25:44.010214] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:19.420 10:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 1846243 00:18:19.420 [2024-07-15 10:25:44.035658] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.jYh5Mzo2Im 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:19.680 00:18:19.680 real 0m5.925s 00:18:19.680 user 0m9.119s 00:18:19.680 sys 0m1.083s 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:19.680 10:25:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.680 ************************************ 00:18:19.680 END TEST raid_read_error_test 00:18:19.680 ************************************ 00:18:19.680 10:25:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:19.680 10:25:44 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:18:19.680 10:25:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:19.680 10:25:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:19.680 10:25:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.680 ************************************ 00:18:19.680 START TEST raid_write_error_test 00:18:19.680 ************************************ 00:18:19.680 10:25:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.hiQK7rDJsq 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=1847313 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 1847313 /var/tmp/spdk-raid.sock 00:18:19.680 10:25:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 1847313 ']' 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:19.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.680 10:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.680 [2024-07-15 10:25:44.356859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:19.680 [2024-07-15 10:25:44.356909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847313 ] 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:19.680 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:19.680 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:19.680 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:19.680 [2024-07-15 10:25:44.448424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.938 [2024-07-15 10:25:44.523549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.938 [2024-07-15 10:25:44.578182] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.938 [2024-07-15 10:25:44.578211] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.507 10:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.507 10:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:20.507 10:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:20.507 10:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:20.765 BaseBdev1_malloc 00:18:20.765 10:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:20.765 true 00:18:20.765 10:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:21.024 [2024-07-15 10:25:45.650697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:21.024 [2024-07-15 10:25:45.650730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.024 [2024-07-15 10:25:45.650745] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf3c190 00:18:21.024 [2024-07-15 10:25:45.650754] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.024 [2024-07-15 10:25:45.651986] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.024 [2024-07-15 10:25:45.652008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:21.024 BaseBdev1 00:18:21.024 10:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:21.024 10:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:21.283 BaseBdev2_malloc 00:18:21.283 10:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:21.283 true 00:18:21.283 10:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:21.542 [2024-07-15 10:25:46.147707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:21.542 [2024-07-15 10:25:46.147739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.542 [2024-07-15 10:25:46.147753] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf40e20 00:18:21.542 [2024-07-15 10:25:46.147765] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.542 [2024-07-15 10:25:46.148835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.542 [2024-07-15 10:25:46.148857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:21.542 BaseBdev2 00:18:21.542 10:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:21.542 10:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:21.542 BaseBdev3_malloc 00:18:21.542 10:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:21.800 true 00:18:21.800 10:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:22.059 [2024-07-15 10:25:46.648885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:22.059 [2024-07-15 10:25:46.648923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:22.059 [2024-07-15 10:25:46.648939] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf41d90 00:18:22.059 [2024-07-15 10:25:46.648962] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.059 [2024-07-15 10:25:46.649999] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.059 [2024-07-15 10:25:46.650022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:22.059 BaseBdev3 00:18:22.059 10:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:22.059 10:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:22.059 BaseBdev4_malloc 00:18:22.059 10:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:18:22.369 true 00:18:22.369 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:18:22.369 [2024-07-15 10:25:47.157668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:18:22.369 [2024-07-15 10:25:47.157702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.369 [2024-07-15 10:25:47.157716] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf44000 00:18:22.369 [2024-07-15 10:25:47.157724] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.642 [2024-07-15 10:25:47.158779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.642 [2024-07-15 10:25:47.158800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:22.642 BaseBdev4 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:18:22.642 [2024-07-15 10:25:47.326175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.642 [2024-07-15 10:25:47.327061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.642 [2024-07-15 10:25:47.327106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:22.642 [2024-07-15 10:25:47.327142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:22.642 [2024-07-15 10:25:47.327289] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf44dd0 00:18:22.642 [2024-07-15 10:25:47.327296] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:22.642 [2024-07-15 10:25:47.327420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf46080 00:18:22.642 [2024-07-15 10:25:47.327522] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf44dd0 00:18:22.642 [2024-07-15 10:25:47.327529] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf44dd0 00:18:22.642 [2024-07-15 10:25:47.327593] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.642 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.900 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.900 "name": "raid_bdev1", 00:18:22.900 "uuid": "f4779ef7-7415-409d-8bfa-355c99d2777b", 00:18:22.900 "strip_size_kb": 0, 00:18:22.900 "state": "online", 00:18:22.900 "raid_level": "raid1", 00:18:22.900 "superblock": true, 00:18:22.900 "num_base_bdevs": 4, 00:18:22.900 "num_base_bdevs_discovered": 4, 00:18:22.900 "num_base_bdevs_operational": 4, 00:18:22.900 "base_bdevs_list": [ 00:18:22.900 { 00:18:22.900 "name": "BaseBdev1", 00:18:22.900 "uuid": "0cc75372-d8d5-5782-8b8f-d8c949f57713", 00:18:22.900 "is_configured": true, 00:18:22.900 "data_offset": 2048, 00:18:22.900 "data_size": 63488 00:18:22.900 }, 00:18:22.900 { 00:18:22.900 "name": "BaseBdev2", 00:18:22.900 "uuid": "b8ac3983-eb8a-56da-aa81-2d8811c6dc06", 00:18:22.900 "is_configured": true, 00:18:22.900 "data_offset": 2048, 00:18:22.900 "data_size": 63488 00:18:22.900 }, 00:18:22.900 { 00:18:22.900 "name": "BaseBdev3", 00:18:22.900 "uuid": "e3062bda-1168-5cdb-922c-08916d47a816", 00:18:22.900 "is_configured": true, 00:18:22.900 "data_offset": 2048, 00:18:22.900 "data_size": 63488 00:18:22.900 }, 00:18:22.900 { 00:18:22.900 "name": "BaseBdev4", 00:18:22.900 "uuid": "5f929f27-dd01-56c0-a40c-5e01a68a324a", 00:18:22.900 "is_configured": true, 00:18:22.900 "data_offset": 2048, 00:18:22.900 "data_size": 63488 00:18:22.900 } 00:18:22.900 ] 00:18:22.900 }' 00:18:22.900 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.900 10:25:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.177 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:23.177 10:25:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:23.436 [2024-07-15 10:25:48.036198] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf46080 
00:18:24.372 10:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:24.372 [2024-07-15 10:25:49.114922] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:18:24.372 [2024-07-15 10:25:49.114967] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:24.372 [2024-07-15 10:25:49.115147] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf46080 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.372 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.631 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.631 "name": "raid_bdev1", 00:18:24.631 "uuid": "f4779ef7-7415-409d-8bfa-355c99d2777b", 00:18:24.631 "strip_size_kb": 0, 00:18:24.631 "state": "online", 00:18:24.631 "raid_level": "raid1", 00:18:24.631 "superblock": true, 00:18:24.631 "num_base_bdevs": 4, 00:18:24.631 "num_base_bdevs_discovered": 3, 00:18:24.631 "num_base_bdevs_operational": 3, 00:18:24.631 "base_bdevs_list": [ 00:18:24.631 { 00:18:24.631 "name": null, 00:18:24.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.631 "is_configured": false, 00:18:24.631 "data_offset": 2048, 00:18:24.631 "data_size": 63488 00:18:24.631 }, 00:18:24.631 { 00:18:24.631 "name": "BaseBdev2", 00:18:24.631 "uuid": "b8ac3983-eb8a-56da-aa81-2d8811c6dc06", 00:18:24.631 "is_configured": true, 00:18:24.631 "data_offset": 2048, 00:18:24.631 "data_size": 63488 00:18:24.631 }, 00:18:24.631 { 00:18:24.631 "name": "BaseBdev3", 00:18:24.631 "uuid": 
"e3062bda-1168-5cdb-922c-08916d47a816", 00:18:24.631 "is_configured": true, 00:18:24.631 "data_offset": 2048, 00:18:24.631 "data_size": 63488 00:18:24.631 }, 00:18:24.631 { 00:18:24.631 "name": "BaseBdev4", 00:18:24.631 "uuid": "5f929f27-dd01-56c0-a40c-5e01a68a324a", 00:18:24.631 "is_configured": true, 00:18:24.631 "data_offset": 2048, 00:18:24.631 "data_size": 63488 00:18:24.631 } 00:18:24.631 ] 00:18:24.631 }' 00:18:24.631 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.631 10:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.198 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:25.198 [2024-07-15 10:25:49.959599] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.198 [2024-07-15 10:25:49.959634] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.198 [2024-07-15 10:25:49.961550] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.198 [2024-07-15 10:25:49.961576] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.198 [2024-07-15 10:25:49.961636] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.198 [2024-07-15 10:25:49.961643] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf44dd0 name raid_bdev1, state offline 00:18:25.198 0 00:18:25.198 10:25:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 1847313 00:18:25.198 10:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 1847313 ']' 00:18:25.198 10:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 1847313 00:18:25.198 10:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:25.198 10:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.480 10:25:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1847313 00:18:25.480 10:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.480 10:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.480 10:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1847313' 00:18:25.480 killing process with pid 1847313 00:18:25.480 10:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 1847313 00:18:25.480 [2024-07-15 10:25:50.035458] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.480 10:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 1847313 00:18:25.480 [2024-07-15 10:25:50.061309] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.480 10:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.hiQK7rDJsq 00:18:25.480 10:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:25.481 10:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:25.481 10:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:18:25.481 10:25:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:18:25.481 10:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:25.481 10:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:18:25.481 10:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:18:25.481 00:18:25.481 real 0m5.945s 00:18:25.481 user 0m9.158s 00:18:25.481 sys 0m1.060s 00:18:25.481 10:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.481 10:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.481 ************************************ 00:18:25.481 END TEST raid_write_error_test 00:18:25.481 ************************************ 00:18:25.740 10:25:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:25.740 10:25:50 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:18:25.740 10:25:50 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:18:25.740 10:25:50 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:18:25.740 10:25:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:25.740 10:25:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.740 10:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.740 ************************************ 00:18:25.740 START TEST raid_rebuild_test 00:18:25.740 ************************************ 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:25.740 10:25:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=1848352 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 1848352 /var/tmp/spdk-raid.sock 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 1848352 ']' 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:25.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.740 10:25:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.740 [2024-07-15 10:25:50.399635] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:25.740 [2024-07-15 10:25:50.399683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848352 ] 00:18:25.740 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:25.740 Zero copy mechanism will not be used. 
00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:25.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.740 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:25.741 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:25.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:25.741 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:25.741 [2024-07-15 10:25:50.491151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.999 [2024-07-15 10:25:50.561940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.999 [2024-07-15 10:25:50.619014] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.999 [2024-07-15 10:25:50.619040] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.566 10:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.566 10:25:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:18:26.566 10:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:26.566 10:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:26.566 BaseBdev1_malloc 00:18:26.566 10:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:26.824 [2024-07-15 10:25:51.502629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:26.824 [2024-07-15 10:25:51.502667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.824 [2024-07-15 10:25:51.502681] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10da5f0 00:18:26.824 [2024-07-15 10:25:51.502705] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.824 [2024-07-15 10:25:51.503812] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.824 [2024-07-15 10:25:51.503835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:26.824 BaseBdev1 00:18:26.824 10:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:26.824 10:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:27.083 BaseBdev2_malloc 00:18:27.083 10:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:27.083 
[2024-07-15 10:25:51.834982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:27.083 [2024-07-15 10:25:51.835013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.083 [2024-07-15 10:25:51.835026] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x127e130 00:18:27.083 [2024-07-15 10:25:51.835035] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.083 [2024-07-15 10:25:51.836033] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.083 [2024-07-15 10:25:51.836053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:27.083 BaseBdev2 00:18:27.083 10:25:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:27.341 spare_malloc 00:18:27.341 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:27.599 spare_delay 00:18:27.599 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:27.599 [2024-07-15 10:25:52.363700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:27.599 [2024-07-15 10:25:52.363731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.599 [2024-07-15 10:25:52.363744] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x127d770 00:18:27.599 [2024-07-15 10:25:52.363753] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.599 [2024-07-15 10:25:52.364705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.599 [2024-07-15 10:25:52.364726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:27.599 spare 00:18:27.599 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:27.857 [2024-07-15 10:25:52.536152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.857 [2024-07-15 10:25:52.536917] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.857 [2024-07-15 10:25:52.536981] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10d2270 00:18:27.857 [2024-07-15 10:25:52.536988] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:27.857 [2024-07-15 10:25:52.537108] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x127e3c0 00:18:27.857 [2024-07-15 10:25:52.537199] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10d2270 00:18:27.857 [2024-07-15 10:25:52.537206] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10d2270 00:18:27.857 [2024-07-15 10:25:52.537275] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.857 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.114 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.114 "name": "raid_bdev1", 00:18:28.114 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:28.114 "strip_size_kb": 0, 00:18:28.114 "state": "online", 00:18:28.114 "raid_level": "raid1", 00:18:28.114 "superblock": false, 00:18:28.114 "num_base_bdevs": 2, 00:18:28.114 "num_base_bdevs_discovered": 2, 00:18:28.114 "num_base_bdevs_operational": 2, 00:18:28.114 "base_bdevs_list": [ 00:18:28.114 { 00:18:28.114 "name": "BaseBdev1", 00:18:28.115 "uuid": "cbd10328-6b8e-5c87-9042-21d91e9dba18", 00:18:28.115 "is_configured": true, 00:18:28.115 "data_offset": 0, 00:18:28.115 "data_size": 65536 00:18:28.115 }, 00:18:28.115 { 00:18:28.115 "name": "BaseBdev2", 00:18:28.115 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:28.115 "is_configured": true, 00:18:28.115 "data_offset": 0, 00:18:28.115 "data_size": 65536 00:18:28.115 } 00:18:28.115 ] 00:18:28.115 }' 00:18:28.115 10:25:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.115 10:25:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.680 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:28.680 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:28.680 [2024-07-15 10:25:53.370441] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.680 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:18:28.680 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.680 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:28.939 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:28.940 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:28.940 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:28.940 [2024-07-15 10:25:53.719225] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x127e3c0 00:18:28.940 /dev/nbd0 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.199 1+0 records in 00:18:29.199 1+0 records out 00:18:29.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216454 s, 18.9 MB/s 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:18:29.199 10:25:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:33.388 65536+0 records in 00:18:33.388 65536+0 records out 00:18:33.388 33554432 bytes (34 MB, 32 MiB) copied, 3.8488 s, 8.7 MB/s 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:33.388 [2024-07-15 10:25:57.811359] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:33.388 10:25:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:33.388 [2024-07-15 10:25:57.984029] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.388 
10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.388 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.389 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.647 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.647 "name": "raid_bdev1", 00:18:33.647 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:33.647 "strip_size_kb": 0, 00:18:33.647 "state": "online", 00:18:33.647 "raid_level": "raid1", 00:18:33.647 "superblock": false, 00:18:33.647 "num_base_bdevs": 2, 00:18:33.647 "num_base_bdevs_discovered": 1, 00:18:33.647 "num_base_bdevs_operational": 1, 00:18:33.647 "base_bdevs_list": [ 00:18:33.647 { 00:18:33.647 "name": null, 00:18:33.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.647 "is_configured": false, 00:18:33.647 "data_offset": 0, 00:18:33.647 "data_size": 65536 00:18:33.647 }, 00:18:33.647 { 00:18:33.647 "name": "BaseBdev2", 00:18:33.647 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:33.647 "is_configured": true, 00:18:33.647 "data_offset": 0, 00:18:33.647 "data_size": 65536 00:18:33.647 } 00:18:33.647 ] 00:18:33.647 }' 00:18:33.647 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.647 10:25:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.905 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.164 [2024-07-15 10:25:58.826220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.164 [2024-07-15 10:25:58.830538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x127e3c0 00:18:34.164 [2024-07-15 10:25:58.832105] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.164 10:25:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:35.098 10:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.098 10:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:35.098 10:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:35.098 10:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:35.098 10:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:35.098 10:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.098 10:25:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.357 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.357 "name": "raid_bdev1", 00:18:35.357 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:35.357 "strip_size_kb": 0, 00:18:35.357 "state": "online", 00:18:35.357 "raid_level": "raid1", 00:18:35.357 "superblock": false, 00:18:35.357 "num_base_bdevs": 2, 00:18:35.357 "num_base_bdevs_discovered": 2, 00:18:35.357 "num_base_bdevs_operational": 2, 
00:18:35.357 "process": { 00:18:35.357 "type": "rebuild", 00:18:35.357 "target": "spare", 00:18:35.357 "progress": { 00:18:35.357 "blocks": 22528, 00:18:35.357 "percent": 34 00:18:35.357 } 00:18:35.357 }, 00:18:35.357 "base_bdevs_list": [ 00:18:35.357 { 00:18:35.357 "name": "spare", 00:18:35.357 "uuid": "0453e5b6-d4df-5c35-b0df-4cc88b665aa3", 00:18:35.357 "is_configured": true, 00:18:35.357 "data_offset": 0, 00:18:35.357 "data_size": 65536 00:18:35.357 }, 00:18:35.357 { 00:18:35.357 "name": "BaseBdev2", 00:18:35.357 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:35.357 "is_configured": true, 00:18:35.357 "data_offset": 0, 00:18:35.357 "data_size": 65536 00:18:35.357 } 00:18:35.357 ] 00:18:35.357 }' 00:18:35.357 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:35.357 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.357 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:35.357 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.357 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:35.615 [2024-07-15 10:26:00.262698] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.615 [2024-07-15 10:26:00.342589] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:35.615 [2024-07-15 10:26:00.342634] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.615 [2024-07-15 10:26:00.342643] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:35.615 [2024-07-15 10:26:00.342665] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.615 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.872 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:35.872 "name": "raid_bdev1", 00:18:35.872 "uuid": 
"4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:35.872 "strip_size_kb": 0, 00:18:35.872 "state": "online", 00:18:35.872 "raid_level": "raid1", 00:18:35.872 "superblock": false, 00:18:35.872 "num_base_bdevs": 2, 00:18:35.872 "num_base_bdevs_discovered": 1, 00:18:35.872 "num_base_bdevs_operational": 1, 00:18:35.872 "base_bdevs_list": [ 00:18:35.872 { 00:18:35.872 "name": null, 00:18:35.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.872 "is_configured": false, 00:18:35.872 "data_offset": 0, 00:18:35.872 "data_size": 65536 00:18:35.872 }, 00:18:35.872 { 00:18:35.872 "name": "BaseBdev2", 00:18:35.872 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:35.872 "is_configured": true, 00:18:35.872 "data_offset": 0, 00:18:35.872 "data_size": 65536 00:18:35.872 } 00:18:35.872 ] 00:18:35.872 }' 00:18:35.872 10:26:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:35.872 10:26:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:36.439 "name": "raid_bdev1", 00:18:36.439 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:36.439 "strip_size_kb": 0, 00:18:36.439 "state": "online", 00:18:36.439 "raid_level": "raid1", 00:18:36.439 "superblock": false, 00:18:36.439 "num_base_bdevs": 2, 00:18:36.439 "num_base_bdevs_discovered": 1, 00:18:36.439 "num_base_bdevs_operational": 1, 00:18:36.439 "base_bdevs_list": [ 00:18:36.439 { 00:18:36.439 "name": null, 00:18:36.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.439 "is_configured": false, 00:18:36.439 "data_offset": 0, 00:18:36.439 "data_size": 65536 00:18:36.439 }, 00:18:36.439 { 00:18:36.439 "name": "BaseBdev2", 00:18:36.439 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:36.439 "is_configured": true, 00:18:36.439 "data_offset": 0, 00:18:36.439 "data_size": 65536 00:18:36.439 } 00:18:36.439 ] 00:18:36.439 }' 00:18:36.439 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:36.696 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:36.696 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:36.696 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:36.696 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:36.696 [2024-07-15 10:26:01.449495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:18:36.696 [2024-07-15 10:26:01.453949] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1272290 00:18:36.696 [2024-07-15 10:26:01.455020] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.696 10:26:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:38.088 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.088 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:38.088 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:38.088 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:38.088 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:38.088 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.088 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.088 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:38.088 "name": "raid_bdev1", 00:18:38.088 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:38.088 "strip_size_kb": 0, 00:18:38.088 "state": "online", 00:18:38.088 "raid_level": "raid1", 00:18:38.088 "superblock": false, 00:18:38.088 "num_base_bdevs": 2, 00:18:38.088 "num_base_bdevs_discovered": 2, 00:18:38.088 "num_base_bdevs_operational": 2, 00:18:38.088 "process": { 00:18:38.088 "type": "rebuild", 00:18:38.088 "target": "spare", 00:18:38.088 "progress": { 00:18:38.088 "blocks": 22528, 00:18:38.088 "percent": 34 00:18:38.088 } 00:18:38.088 }, 00:18:38.088 "base_bdevs_list": [ 00:18:38.088 { 00:18:38.088 "name": "spare", 00:18:38.088 "uuid": "0453e5b6-d4df-5c35-b0df-4cc88b665aa3", 00:18:38.088 "is_configured": true, 00:18:38.088 "data_offset": 0, 00:18:38.088 "data_size": 65536 00:18:38.088 }, 00:18:38.088 { 00:18:38.088 "name": "BaseBdev2", 00:18:38.088 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:38.088 "is_configured": true, 00:18:38.088 "data_offset": 0, 00:18:38.088 "data_size": 65536 00:18:38.088 } 00:18:38.089 ] 00:18:38.089 }' 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=583 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.089 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.363 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:38.363 "name": "raid_bdev1", 00:18:38.363 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:38.363 "strip_size_kb": 0, 00:18:38.363 "state": "online", 00:18:38.363 "raid_level": "raid1", 00:18:38.363 "superblock": false, 00:18:38.363 "num_base_bdevs": 2, 00:18:38.363 "num_base_bdevs_discovered": 2, 00:18:38.363 "num_base_bdevs_operational": 2, 00:18:38.363 "process": { 00:18:38.363 "type": "rebuild", 00:18:38.363 "target": "spare", 00:18:38.363 "progress": { 00:18:38.363 "blocks": 28672, 00:18:38.363 "percent": 43 00:18:38.363 } 00:18:38.363 }, 00:18:38.363 "base_bdevs_list": [ 00:18:38.363 { 00:18:38.363 "name": "spare", 00:18:38.363 "uuid": "0453e5b6-d4df-5c35-b0df-4cc88b665aa3", 00:18:38.363 "is_configured": true, 00:18:38.363 "data_offset": 0, 00:18:38.363 "data_size": 65536 00:18:38.363 }, 00:18:38.363 { 00:18:38.363 "name": "BaseBdev2", 00:18:38.363 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:38.363 "is_configured": true, 00:18:38.363 "data_offset": 0, 00:18:38.363 "data_size": 65536 00:18:38.363 } 00:18:38.363 ] 00:18:38.363 }' 00:18:38.363 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:38.363 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.363 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:38.363 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.363 10:26:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:39.294 10:26:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:39.294 10:26:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.294 10:26:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:39.294 10:26:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:39.294 10:26:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:39.294 10:26:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:39.294 10:26:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.294 10:26:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.551 10:26:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:18:39.551 "name": "raid_bdev1", 00:18:39.551 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:39.551 "strip_size_kb": 0, 00:18:39.551 "state": "online", 00:18:39.551 "raid_level": "raid1", 00:18:39.551 "superblock": false, 00:18:39.551 "num_base_bdevs": 2, 00:18:39.551 "num_base_bdevs_discovered": 2, 00:18:39.551 "num_base_bdevs_operational": 2, 00:18:39.551 "process": { 00:18:39.551 "type": "rebuild", 00:18:39.551 "target": "spare", 00:18:39.551 "progress": { 00:18:39.552 "blocks": 53248, 00:18:39.552 "percent": 81 00:18:39.552 } 00:18:39.552 }, 00:18:39.552 "base_bdevs_list": [ 00:18:39.552 { 00:18:39.552 "name": "spare", 00:18:39.552 "uuid": "0453e5b6-d4df-5c35-b0df-4cc88b665aa3", 00:18:39.552 "is_configured": true, 00:18:39.552 "data_offset": 0, 00:18:39.552 "data_size": 65536 00:18:39.552 }, 00:18:39.552 { 00:18:39.552 "name": "BaseBdev2", 00:18:39.552 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:39.552 "is_configured": true, 00:18:39.552 "data_offset": 0, 00:18:39.552 "data_size": 65536 00:18:39.552 } 00:18:39.552 ] 00:18:39.552 }' 00:18:39.552 10:26:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:39.552 10:26:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.552 10:26:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:39.552 10:26:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.552 10:26:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:40.118 [2024-07-15 10:26:04.676968] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:40.118 [2024-07-15 10:26:04.677010] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:40.118 [2024-07-15 10:26:04.677039] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.683 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.683 "name": "raid_bdev1", 00:18:40.683 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:40.683 "strip_size_kb": 0, 00:18:40.683 "state": "online", 00:18:40.683 "raid_level": "raid1", 00:18:40.683 "superblock": false, 00:18:40.683 "num_base_bdevs": 2, 00:18:40.683 "num_base_bdevs_discovered": 2, 00:18:40.683 "num_base_bdevs_operational": 2, 00:18:40.683 "base_bdevs_list": [ 00:18:40.683 { 00:18:40.683 "name": "spare", 00:18:40.683 "uuid": 
"0453e5b6-d4df-5c35-b0df-4cc88b665aa3", 00:18:40.684 "is_configured": true, 00:18:40.684 "data_offset": 0, 00:18:40.684 "data_size": 65536 00:18:40.684 }, 00:18:40.684 { 00:18:40.684 "name": "BaseBdev2", 00:18:40.684 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:40.684 "is_configured": true, 00:18:40.684 "data_offset": 0, 00:18:40.684 "data_size": 65536 00:18:40.684 } 00:18:40.684 ] 00:18:40.684 }' 00:18:40.684 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:40.684 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:40.684 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:40.942 "name": "raid_bdev1", 00:18:40.942 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:40.942 "strip_size_kb": 0, 00:18:40.942 "state": "online", 00:18:40.942 "raid_level": "raid1", 00:18:40.942 "superblock": false, 00:18:40.942 "num_base_bdevs": 2, 00:18:40.942 "num_base_bdevs_discovered": 2, 00:18:40.942 "num_base_bdevs_operational": 2, 00:18:40.942 "base_bdevs_list": [ 00:18:40.942 { 00:18:40.942 "name": "spare", 00:18:40.942 "uuid": "0453e5b6-d4df-5c35-b0df-4cc88b665aa3", 00:18:40.942 "is_configured": true, 00:18:40.942 "data_offset": 0, 00:18:40.942 "data_size": 65536 00:18:40.942 }, 00:18:40.942 { 00:18:40.942 "name": "BaseBdev2", 00:18:40.942 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:40.942 "is_configured": true, 00:18:40.942 "data_offset": 0, 00:18:40.942 "data_size": 65536 00:18:40.942 } 00:18:40.942 ] 00:18:40.942 }' 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:40.942 
10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.942 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.200 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.200 "name": "raid_bdev1", 00:18:41.200 "uuid": "4d8a5ea6-740a-42df-ae81-53881562db88", 00:18:41.200 "strip_size_kb": 0, 00:18:41.200 "state": "online", 00:18:41.200 "raid_level": "raid1", 00:18:41.200 "superblock": false, 00:18:41.200 "num_base_bdevs": 2, 00:18:41.200 "num_base_bdevs_discovered": 2, 00:18:41.200 "num_base_bdevs_operational": 2, 00:18:41.200 "base_bdevs_list": [ 00:18:41.200 { 00:18:41.200 "name": "spare", 00:18:41.200 "uuid": "0453e5b6-d4df-5c35-b0df-4cc88b665aa3", 00:18:41.200 "is_configured": true, 00:18:41.200 "data_offset": 0, 00:18:41.200 "data_size": 65536 00:18:41.200 }, 00:18:41.200 { 00:18:41.200 "name": "BaseBdev2", 00:18:41.200 "uuid": "5cafa7be-e5fa-517d-9212-dacca0600608", 00:18:41.200 "is_configured": true, 00:18:41.200 "data_offset": 0, 00:18:41.200 "data_size": 65536 00:18:41.201 } 00:18:41.201 ] 00:18:41.201 }' 00:18:41.201 10:26:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.201 10:26:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.767 10:26:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:41.767 [2024-07-15 10:26:06.541507] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.767 [2024-07-15 10:26:06.541531] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.767 [2024-07-15 10:26:06.541574] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.767 [2024-07-15 10:26:06.541613] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.767 [2024-07-15 10:26:06.541620] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10d2270 name raid_bdev1, state offline 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:18:42.025 10:26:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.025 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:42.283 /dev/nbd0 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.283 1+0 records in 00:18:42.283 1+0 records out 00:18:42.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000118105 s, 34.7 MB/s 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.283 10:26:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:42.542 /dev/nbd1 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.542 1+0 records in 00:18:42.542 1+0 records out 00:18:42.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316306 s, 12.9 MB/s 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.542 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:42.800 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:42.800 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:42.800 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd0 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.801 10:26:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 1848352 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 1848352 ']' 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 1848352 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1848352 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1848352' 00:18:43.059 killing process with pid 1848352 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 1848352 00:18:43.059 Received shutdown signal, test time was about 60.000000 seconds 00:18:43.059 00:18:43.059 Latency(us) 00:18:43.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.059 =================================================================================================================== 00:18:43.059 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.059 [2024-07-15 10:26:07.643590] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 1848352 00:18:43.059 [2024-07-15 10:26:07.667108] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.059 10:26:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:18:43.059 00:18:43.059 real 0m17.499s 00:18:43.059 user 0m22.724s 00:18:43.059 sys 0m4.118s 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:43.059 10:26:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.059 ************************************ 00:18:43.059 END TEST raid_rebuild_test 00:18:43.059 ************************************ 00:18:43.318 10:26:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:43.318 10:26:07 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:18:43.318 10:26:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:43.318 10:26:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:43.318 10:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.318 ************************************ 00:18:43.318 START TEST raid_rebuild_test_sb 00:18:43.318 ************************************ 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:43.318 10:26:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=1851654 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 1851654 /var/tmp/spdk-raid.sock 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1851654 ']' 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:43.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.318 10:26:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.318 [2024-07-15 10:26:07.990366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:43.318 [2024-07-15 10:26:07.990413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851654 ] 00:18:43.318 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:43.318 Zero copy mechanism will not be used. 
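The records above show the raid_rebuild_test_sb harness launching bdevperf as the RPC target for this run. Condensed into a sketch, with the binary path, socket and flags copied from the xtrace and only the explicit backgrounding and pid capture inferred, the launch-and-wait step looks roughly like:

  # start bdevperf with its own RPC socket, then block until that socket answers
  sock=/var/tmp/spdk-raid.sock
  /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
      -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!                         # the trace records raid_pid=1851654
  waitforlisten "$raid_pid" "$sock"   # SPDK test-harness helper, as seen in the trace above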
00:18:43.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.318 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:43.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.318 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:43.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.318 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:43.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.318 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:43.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.318 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:43.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:43.319 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:43.319 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:43.319 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:43.319 [2024-07-15 10:26:08.083563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.577 [2024-07-15 10:26:08.157156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.577 [2024-07-15 10:26:08.207744] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.577 [2024-07-15 10:26:08.207771] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:44.144 10:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.144 10:26:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:44.144 10:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:44.144 10:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:44.403 BaseBdev1_malloc 00:18:44.403 10:26:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:44.403 [2024-07-15 10:26:09.103406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:44.403 [2024-07-15 10:26:09.103444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.403 [2024-07-15 10:26:09.103461] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17a55f0 00:18:44.403 [2024-07-15 10:26:09.103486] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.403 [2024-07-15 10:26:09.104565] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.403 [2024-07-15 10:26:09.104587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:44.403 BaseBdev1 00:18:44.403 10:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:44.403 10:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:44.660 BaseBdev2_malloc 00:18:44.660 10:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 
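Before the passthru registration output that follows, the bdev stack this test assembles is easier to see condensed into one place. Every RPC below is copied from the surrounding xtrace; the rpc shell variable is shorthand introduced here for readability, and the comments are interpretation rather than script text:

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # two 32 MB, 512-byte-block malloc bdevs, each wrapped in a passthru bdev, become the raid members
  $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
  $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  # the spare sits behind a delay bdev (write latency 100000, i.e. 100 ms if the usual
  # microsecond units apply), presumably so the rebuild is slow enough to observe
  $rpc bdev_malloc_create 32 512 -b spare_malloc
  $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $rpc bdev_passthru_create -b spare_delay -p spare
  # -s requests an on-disk superblock, which is what the _sb variant of the test adds
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1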
00:18:44.660 [2024-07-15 10:26:09.439971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:44.660 [2024-07-15 10:26:09.440005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.660 [2024-07-15 10:26:09.440021] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1949130 00:18:44.660 [2024-07-15 10:26:09.440030] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.660 [2024-07-15 10:26:09.441039] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.660 [2024-07-15 10:26:09.441065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:44.660 BaseBdev2 00:18:44.918 10:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:44.918 spare_malloc 00:18:44.918 10:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:45.174 spare_delay 00:18:45.174 10:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:45.174 [2024-07-15 10:26:09.960725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.174 [2024-07-15 10:26:09.960758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.174 [2024-07-15 10:26:09.960771] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1948770 00:18:45.174 [2024-07-15 10:26:09.960780] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.174 [2024-07-15 10:26:09.961720] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.174 [2024-07-15 10:26:09.961743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.431 spare 00:18:45.431 10:26:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:45.431 [2024-07-15 10:26:10.133186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.431 [2024-07-15 10:26:10.133981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.431 [2024-07-15 10:26:10.134085] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x179d270 00:18:45.431 [2024-07-15 10:26:10.134094] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:45.431 [2024-07-15 10:26:10.134208] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19493c0 00:18:45.431 [2024-07-15 10:26:10.134295] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x179d270 00:18:45.431 [2024-07-15 10:26:10.134301] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x179d270 00:18:45.431 [2024-07-15 10:26:10.134358] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.431 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.688 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.688 "name": "raid_bdev1", 00:18:45.688 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:45.688 "strip_size_kb": 0, 00:18:45.688 "state": "online", 00:18:45.688 "raid_level": "raid1", 00:18:45.688 "superblock": true, 00:18:45.688 "num_base_bdevs": 2, 00:18:45.688 "num_base_bdevs_discovered": 2, 00:18:45.688 "num_base_bdevs_operational": 2, 00:18:45.688 "base_bdevs_list": [ 00:18:45.688 { 00:18:45.688 "name": "BaseBdev1", 00:18:45.688 "uuid": "3d28c58a-868a-510a-963b-4c20a17f39dd", 00:18:45.688 "is_configured": true, 00:18:45.688 "data_offset": 2048, 00:18:45.688 "data_size": 63488 00:18:45.688 }, 00:18:45.688 { 00:18:45.688 "name": "BaseBdev2", 00:18:45.688 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:45.688 "is_configured": true, 00:18:45.688 "data_offset": 2048, 00:18:45.688 "data_size": 63488 00:18:45.688 } 00:18:45.688 ] 00:18:45.688 }' 00:18:45.688 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.688 10:26:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.252 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:46.252 10:26:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:46.252 [2024-07-15 10:26:10.987539] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.252 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:18:46.252 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.252 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.509 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:46.767 [2024-07-15 10:26:11.348335] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19493c0 00:18:46.767 /dev/nbd0 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.767 1+0 records in 00:18:46.767 1+0 records out 00:18:46.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027389 s, 15.0 MB/s 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # return 0 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:18:46.767 10:26:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:50.954 63488+0 records in 00:18:50.954 63488+0 records out 00:18:50.954 32505856 bytes (33 MB, 31 MiB) copied, 3.87427 s, 8.4 MB/s 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:50.954 [2024-07-15 10:26:15.475500] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:50.954 [2024-07-15 10:26:15.639972] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.954 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.212 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.212 "name": "raid_bdev1", 00:18:51.212 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:51.212 "strip_size_kb": 0, 00:18:51.212 "state": "online", 00:18:51.212 "raid_level": "raid1", 00:18:51.212 "superblock": true, 00:18:51.212 "num_base_bdevs": 2, 00:18:51.212 "num_base_bdevs_discovered": 1, 00:18:51.212 "num_base_bdevs_operational": 1, 00:18:51.212 "base_bdevs_list": [ 00:18:51.212 { 00:18:51.212 "name": null, 00:18:51.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.212 "is_configured": false, 00:18:51.212 "data_offset": 2048, 00:18:51.212 "data_size": 63488 00:18:51.212 }, 00:18:51.212 { 00:18:51.212 "name": "BaseBdev2", 00:18:51.212 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:51.212 "is_configured": true, 00:18:51.212 "data_offset": 2048, 00:18:51.212 "data_size": 63488 00:18:51.212 } 00:18:51.212 ] 00:18:51.212 }' 00:18:51.212 10:26:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.212 10:26:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.778 10:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.778 [2024-07-15 10:26:16.462104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.778 [2024-07-15 10:26:16.466459] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x193d8f0 00:18:51.778 [2024-07-15 10:26:16.468006] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.778 10:26:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:52.713 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.713 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:52.713 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:52.713 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:52.713 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:52.713 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.713 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.972 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:52.972 "name": 
"raid_bdev1", 00:18:52.972 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:52.972 "strip_size_kb": 0, 00:18:52.972 "state": "online", 00:18:52.972 "raid_level": "raid1", 00:18:52.972 "superblock": true, 00:18:52.972 "num_base_bdevs": 2, 00:18:52.972 "num_base_bdevs_discovered": 2, 00:18:52.972 "num_base_bdevs_operational": 2, 00:18:52.972 "process": { 00:18:52.972 "type": "rebuild", 00:18:52.972 "target": "spare", 00:18:52.972 "progress": { 00:18:52.972 "blocks": 22528, 00:18:52.972 "percent": 35 00:18:52.972 } 00:18:52.972 }, 00:18:52.972 "base_bdevs_list": [ 00:18:52.972 { 00:18:52.972 "name": "spare", 00:18:52.972 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:18:52.972 "is_configured": true, 00:18:52.972 "data_offset": 2048, 00:18:52.972 "data_size": 63488 00:18:52.972 }, 00:18:52.972 { 00:18:52.972 "name": "BaseBdev2", 00:18:52.972 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:52.972 "is_configured": true, 00:18:52.972 "data_offset": 2048, 00:18:52.972 "data_size": 63488 00:18:52.972 } 00:18:52.972 ] 00:18:52.972 }' 00:18:52.972 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:52.972 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.972 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:52.972 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.972 10:26:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:53.235 [2024-07-15 10:26:17.898566] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.235 [2024-07-15 10:26:17.978395] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:53.235 [2024-07-15 10:26:17.978430] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.235 [2024-07-15 10:26:17.978440] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.235 [2024-07-15 10:26:17.978445] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.235 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.545 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.545 "name": "raid_bdev1", 00:18:53.545 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:53.545 "strip_size_kb": 0, 00:18:53.545 "state": "online", 00:18:53.545 "raid_level": "raid1", 00:18:53.545 "superblock": true, 00:18:53.545 "num_base_bdevs": 2, 00:18:53.545 "num_base_bdevs_discovered": 1, 00:18:53.545 "num_base_bdevs_operational": 1, 00:18:53.545 "base_bdevs_list": [ 00:18:53.545 { 00:18:53.545 "name": null, 00:18:53.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.545 "is_configured": false, 00:18:53.545 "data_offset": 2048, 00:18:53.545 "data_size": 63488 00:18:53.545 }, 00:18:53.545 { 00:18:53.545 "name": "BaseBdev2", 00:18:53.545 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:53.545 "is_configured": true, 00:18:53.545 "data_offset": 2048, 00:18:53.545 "data_size": 63488 00:18:53.545 } 00:18:53.545 ] 00:18:53.545 }' 00:18:53.545 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.545 10:26:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.110 "name": "raid_bdev1", 00:18:54.110 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:54.110 "strip_size_kb": 0, 00:18:54.110 "state": "online", 00:18:54.110 "raid_level": "raid1", 00:18:54.110 "superblock": true, 00:18:54.110 "num_base_bdevs": 2, 00:18:54.110 "num_base_bdevs_discovered": 1, 00:18:54.110 "num_base_bdevs_operational": 1, 00:18:54.110 "base_bdevs_list": [ 00:18:54.110 { 00:18:54.110 "name": null, 00:18:54.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.110 "is_configured": false, 00:18:54.110 "data_offset": 2048, 00:18:54.110 "data_size": 63488 00:18:54.110 }, 00:18:54.110 { 00:18:54.110 "name": "BaseBdev2", 00:18:54.110 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:54.110 "is_configured": true, 00:18:54.110 "data_offset": 2048, 00:18:54.110 "data_size": 63488 00:18:54.110 } 00:18:54.110 ] 00:18:54.110 }' 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:54.110 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:18:54.366 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:54.366 10:26:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:54.366 [2024-07-15 10:26:19.065186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.366 [2024-07-15 10:26:19.069603] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x193d8f0 00:18:54.366 [2024-07-15 10:26:19.070728] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:54.366 10:26:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.738 "name": "raid_bdev1", 00:18:55.738 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:55.738 "strip_size_kb": 0, 00:18:55.738 "state": "online", 00:18:55.738 "raid_level": "raid1", 00:18:55.738 "superblock": true, 00:18:55.738 "num_base_bdevs": 2, 00:18:55.738 "num_base_bdevs_discovered": 2, 00:18:55.738 "num_base_bdevs_operational": 2, 00:18:55.738 "process": { 00:18:55.738 "type": "rebuild", 00:18:55.738 "target": "spare", 00:18:55.738 "progress": { 00:18:55.738 "blocks": 22528, 00:18:55.738 "percent": 35 00:18:55.738 } 00:18:55.738 }, 00:18:55.738 "base_bdevs_list": [ 00:18:55.738 { 00:18:55.738 "name": "spare", 00:18:55.738 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:18:55.738 "is_configured": true, 00:18:55.738 "data_offset": 2048, 00:18:55.738 "data_size": 63488 00:18:55.738 }, 00:18:55.738 { 00:18:55.738 "name": "BaseBdev2", 00:18:55.738 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:55.738 "is_configured": true, 00:18:55.738 "data_offset": 2048, 00:18:55.738 "data_size": 63488 00:18:55.738 } 00:18:55.738 ] 00:18:55.738 }' 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 
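The "[: =: unary operator expected" message in the record that follows is bash objecting that the left operand of the single-bracket test at bdev_raid.sh line 665 expanded to nothing, as the xtrace '[' = false ']' above shows; the run continues because the failing test is simply treated as false. A minimal reproduction and the usual quoting fix, with the variable name flag chosen for illustration rather than taken from the script:

  flag=""                # empty (or unset) variable
  [ $flag = false ]      # word-splits to '[ = false ]' and prints "[: =: unary operator expected"
  [ "$flag" = false ]    # quoting (or using [[ ]]) keeps the empty operand, so the test just returns false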
00:18:55.738 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=601 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.738 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.996 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.996 "name": "raid_bdev1", 00:18:55.996 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:55.996 "strip_size_kb": 0, 00:18:55.996 "state": "online", 00:18:55.996 "raid_level": "raid1", 00:18:55.996 "superblock": true, 00:18:55.996 "num_base_bdevs": 2, 00:18:55.996 "num_base_bdevs_discovered": 2, 00:18:55.996 "num_base_bdevs_operational": 2, 00:18:55.996 "process": { 00:18:55.996 "type": "rebuild", 00:18:55.996 "target": "spare", 00:18:55.996 "progress": { 00:18:55.996 "blocks": 28672, 00:18:55.996 "percent": 45 00:18:55.996 } 00:18:55.996 }, 00:18:55.996 "base_bdevs_list": [ 00:18:55.996 { 00:18:55.996 "name": "spare", 00:18:55.996 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:18:55.996 "is_configured": true, 00:18:55.996 "data_offset": 2048, 00:18:55.996 "data_size": 63488 00:18:55.996 }, 00:18:55.996 { 00:18:55.996 "name": "BaseBdev2", 00:18:55.996 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:55.996 "is_configured": true, 00:18:55.996 "data_offset": 2048, 00:18:55.996 "data_size": 63488 00:18:55.996 } 00:18:55.996 ] 00:18:55.996 }' 00:18:55.996 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:55.996 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.996 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:55.996 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.996 10:26:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:56.930 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:56.930 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.930 10:26:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:56.930 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:56.930 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:56.930 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:56.930 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.930 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.188 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:57.188 "name": "raid_bdev1", 00:18:57.188 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:57.188 "strip_size_kb": 0, 00:18:57.188 "state": "online", 00:18:57.188 "raid_level": "raid1", 00:18:57.188 "superblock": true, 00:18:57.188 "num_base_bdevs": 2, 00:18:57.188 "num_base_bdevs_discovered": 2, 00:18:57.188 "num_base_bdevs_operational": 2, 00:18:57.188 "process": { 00:18:57.188 "type": "rebuild", 00:18:57.188 "target": "spare", 00:18:57.188 "progress": { 00:18:57.188 "blocks": 53248, 00:18:57.188 "percent": 83 00:18:57.188 } 00:18:57.188 }, 00:18:57.188 "base_bdevs_list": [ 00:18:57.188 { 00:18:57.188 "name": "spare", 00:18:57.188 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:18:57.188 "is_configured": true, 00:18:57.188 "data_offset": 2048, 00:18:57.188 "data_size": 63488 00:18:57.188 }, 00:18:57.188 { 00:18:57.188 "name": "BaseBdev2", 00:18:57.188 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:57.188 "is_configured": true, 00:18:57.188 "data_offset": 2048, 00:18:57.188 "data_size": 63488 00:18:57.188 } 00:18:57.188 ] 00:18:57.188 }' 00:18:57.188 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:57.188 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.188 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:57.188 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.188 10:26:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:57.447 [2024-07-15 10:26:22.192016] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:57.447 [2024-07-15 10:26:22.192060] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:57.447 [2024-07-15 10:26:22.192136] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.382 10:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:58.382 10:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.382 10:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:58.382 10:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:58.382 10:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:58.382 10:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:58.382 10:26:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.382 10:26:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.382 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:58.382 "name": "raid_bdev1", 00:18:58.382 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:58.382 "strip_size_kb": 0, 00:18:58.382 "state": "online", 00:18:58.382 "raid_level": "raid1", 00:18:58.382 "superblock": true, 00:18:58.382 "num_base_bdevs": 2, 00:18:58.382 "num_base_bdevs_discovered": 2, 00:18:58.382 "num_base_bdevs_operational": 2, 00:18:58.382 "base_bdevs_list": [ 00:18:58.382 { 00:18:58.382 "name": "spare", 00:18:58.382 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:18:58.382 "is_configured": true, 00:18:58.382 "data_offset": 2048, 00:18:58.382 "data_size": 63488 00:18:58.382 }, 00:18:58.382 { 00:18:58.382 "name": "BaseBdev2", 00:18:58.382 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:58.382 "is_configured": true, 00:18:58.382 "data_offset": 2048, 00:18:58.382 "data_size": 63488 00:18:58.382 } 00:18:58.382 ] 00:18:58.382 }' 00:18:58.382 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:58.382 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:58.382 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.383 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:58.647 "name": "raid_bdev1", 00:18:58.647 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:58.647 "strip_size_kb": 0, 00:18:58.647 "state": "online", 00:18:58.647 "raid_level": "raid1", 00:18:58.647 "superblock": true, 00:18:58.647 "num_base_bdevs": 2, 00:18:58.647 "num_base_bdevs_discovered": 2, 00:18:58.647 "num_base_bdevs_operational": 2, 00:18:58.647 "base_bdevs_list": [ 00:18:58.647 { 00:18:58.647 "name": "spare", 00:18:58.647 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:18:58.647 "is_configured": true, 00:18:58.647 "data_offset": 2048, 00:18:58.647 "data_size": 63488 00:18:58.647 }, 00:18:58.647 { 00:18:58.647 "name": "BaseBdev2", 00:18:58.647 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:58.647 "is_configured": true, 
00:18:58.647 "data_offset": 2048, 00:18:58.647 "data_size": 63488 00:18:58.647 } 00:18:58.647 ] 00:18:58.647 }' 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.647 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.906 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.906 "name": "raid_bdev1", 00:18:58.906 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:18:58.906 "strip_size_kb": 0, 00:18:58.906 "state": "online", 00:18:58.906 "raid_level": "raid1", 00:18:58.906 "superblock": true, 00:18:58.906 "num_base_bdevs": 2, 00:18:58.906 "num_base_bdevs_discovered": 2, 00:18:58.906 "num_base_bdevs_operational": 2, 00:18:58.906 "base_bdevs_list": [ 00:18:58.906 { 00:18:58.906 "name": "spare", 00:18:58.906 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:18:58.906 "is_configured": true, 00:18:58.906 "data_offset": 2048, 00:18:58.906 "data_size": 63488 00:18:58.906 }, 00:18:58.906 { 00:18:58.906 "name": "BaseBdev2", 00:18:58.906 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:18:58.906 "is_configured": true, 00:18:58.906 "data_offset": 2048, 00:18:58.906 "data_size": 63488 00:18:58.906 } 00:18:58.906 ] 00:18:58.906 }' 00:18:58.906 10:26:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.906 10:26:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.472 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:59.472 [2024-07-15 10:26:24.225137] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:59.472 [2024-07-15 10:26:24.225162] bdev_raid.c:1844:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:18:59.472 [2024-07-15 10:26:24.225209] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.472 [2024-07-15 10:26:24.225249] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.472 [2024-07-15 10:26:24.225257] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x179d270 name raid_bdev1, state offline 00:18:59.472 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.472 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:59.731 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:59.989 /dev/nbd0 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
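The dd probe traced just above belongs to the waitfornbd helper from autotest_common.sh; its output and the stat/rm size check follow below. Reconstructed from the traced commands, the helper first polls /proc/partitions until the nbd device appears, then issues a single 4 KiB O_DIRECT read into a scratch file and checks that a non-zero number of bytes came back. The sketch below follows that shape; the retry sleep and the scratch-file path are assumptions, since this run succeeded on the first attempt.

# Reconstruction of waitfornbd from the traced commands; the 1-second retry
# sleep and the scratch path are assumptions not visible in this log.
waitfornbd() {
    local nbd_name=$1 i size
    local scratch=/tmp/nbdtest

    # Wait (up to 20 attempts) for the device to show up in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 1   # assumed back-off
    done

    # Confirm the device serves I/O: one 4 KiB direct read into a scratch file.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct 2> /dev/null; then
            size=$(stat -c %s "$scratch")
            rm -f "$scratch"
            [ "$size" != 0 ] && return 0
        fi
        sleep 1   # assumed back-off
    done
    return 1
}

# Usage, mirroring the nbd_start_disks flow in this log:
#   waitfornbd nbd0 && waitfornbd nbd1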
00:18:59.989 1+0 records in 00:18:59.989 1+0 records out 00:18:59.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150121 s, 27.3 MB/s 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:59.989 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:00.247 /dev/nbd1 00:19:00.247 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:00.247 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.248 1+0 records in 00:19:00.248 1+0 records out 00:19:00.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032144 s, 12.7 MB/s 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:00.248 10:26:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.248 10:26:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:19:00.506 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:00.765 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:01.023 [2024-07-15 10:26:25.610449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:01.023 [2024-07-15 10:26:25.610483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.023 [2024-07-15 
10:26:25.610500] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x179c940 00:19:01.023 [2024-07-15 10:26:25.610509] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.024 [2024-07-15 10:26:25.611679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.024 [2024-07-15 10:26:25.611704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:01.024 [2024-07-15 10:26:25.611758] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:01.024 [2024-07-15 10:26:25.611776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.024 [2024-07-15 10:26:25.611848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.024 spare 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.024 [2024-07-15 10:26:25.712140] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x179f450 00:19:01.024 [2024-07-15 10:26:25.712151] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:01.024 [2024-07-15 10:26:25.712290] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x193d8f0 00:19:01.024 [2024-07-15 10:26:25.712397] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x179f450 00:19:01.024 [2024-07-15 10:26:25.712403] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x179f450 00:19:01.024 [2024-07-15 10:26:25.712479] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.024 "name": "raid_bdev1", 00:19:01.024 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:01.024 "strip_size_kb": 0, 00:19:01.024 "state": "online", 00:19:01.024 "raid_level": "raid1", 00:19:01.024 "superblock": true, 00:19:01.024 "num_base_bdevs": 2, 00:19:01.024 "num_base_bdevs_discovered": 2, 00:19:01.024 "num_base_bdevs_operational": 2, 00:19:01.024 "base_bdevs_list": [ 00:19:01.024 { 
00:19:01.024 "name": "spare", 00:19:01.024 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:19:01.024 "is_configured": true, 00:19:01.024 "data_offset": 2048, 00:19:01.024 "data_size": 63488 00:19:01.024 }, 00:19:01.024 { 00:19:01.024 "name": "BaseBdev2", 00:19:01.024 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:01.024 "is_configured": true, 00:19:01.024 "data_offset": 2048, 00:19:01.024 "data_size": 63488 00:19:01.024 } 00:19:01.024 ] 00:19:01.024 }' 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.024 10:26:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.590 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.590 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:01.590 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:01.590 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:01.590 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:01.590 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.590 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.848 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:01.848 "name": "raid_bdev1", 00:19:01.848 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:01.848 "strip_size_kb": 0, 00:19:01.848 "state": "online", 00:19:01.848 "raid_level": "raid1", 00:19:01.848 "superblock": true, 00:19:01.848 "num_base_bdevs": 2, 00:19:01.848 "num_base_bdevs_discovered": 2, 00:19:01.848 "num_base_bdevs_operational": 2, 00:19:01.848 "base_bdevs_list": [ 00:19:01.848 { 00:19:01.848 "name": "spare", 00:19:01.848 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:19:01.848 "is_configured": true, 00:19:01.848 "data_offset": 2048, 00:19:01.848 "data_size": 63488 00:19:01.848 }, 00:19:01.848 { 00:19:01.848 "name": "BaseBdev2", 00:19:01.848 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:01.848 "is_configured": true, 00:19:01.848 "data_offset": 2048, 00:19:01.848 "data_size": 63488 00:19:01.848 } 00:19:01.848 ] 00:19:01.848 }' 00:19:01.848 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:01.848 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:01.848 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:01.848 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:01.848 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.848 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:02.106 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.106 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:02.106 [2024-07-15 10:26:26.873746] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:02.106 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.106 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:02.106 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:02.106 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:02.107 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:02.107 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:02.107 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.107 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.107 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.107 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.107 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.107 10:26:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.365 10:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.365 "name": "raid_bdev1", 00:19:02.365 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:02.365 "strip_size_kb": 0, 00:19:02.365 "state": "online", 00:19:02.365 "raid_level": "raid1", 00:19:02.365 "superblock": true, 00:19:02.365 "num_base_bdevs": 2, 00:19:02.365 "num_base_bdevs_discovered": 1, 00:19:02.365 "num_base_bdevs_operational": 1, 00:19:02.365 "base_bdevs_list": [ 00:19:02.365 { 00:19:02.365 "name": null, 00:19:02.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.365 "is_configured": false, 00:19:02.365 "data_offset": 2048, 00:19:02.365 "data_size": 63488 00:19:02.365 }, 00:19:02.365 { 00:19:02.365 "name": "BaseBdev2", 00:19:02.365 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:02.366 "is_configured": true, 00:19:02.366 "data_offset": 2048, 00:19:02.366 "data_size": 63488 00:19:02.366 } 00:19:02.366 ] 00:19:02.366 }' 00:19:02.366 10:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.366 10:26:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.933 10:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:02.933 [2024-07-15 10:26:27.671807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.933 [2024-07-15 10:26:27.671936] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:02.933 [2024-07-15 10:26:27.671950] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
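Every verify_raid_bdev_process call traced in this log follows the same recipe: dump all raid bdevs over the test's dedicated RPC socket, select raid_bdev1 with jq, then compare .process.type and .process.target (defaulting to "none" when no rebuild is in flight) against the expected values. A condensed sketch of that recipe, using the rpc.py and jq invocations visible in the trace, is below; $rootdir is an assumed shorthand for the SPDK checkout path shown in the log.

# Condensed sketch of the check performed at bdev_raid.sh@182-190; "$rootdir"
# is an assumed shorthand for the SPDK checkout used by this run.
verify_raid_bdev_process() {
    local raid_bdev_name=$1 process_type=$2 target=$3
    local raid_bdev_info

    # Fetch every raid bdev over the raid-test RPC socket and keep only ours.
    raid_bdev_info=$("$rootdir"/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

    # '// "none"' maps a missing .process field to the string "none", so the
    # same comparison works before, during and after a rebuild.
    [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]] || return 1
    [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
}

# e.g. verify_raid_bdev_process raid_bdev1 rebuild spare   # while rebuilding
#      verify_raid_bdev_process raid_bdev1 none none       # once it finishes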
00:19:02.933 [2024-07-15 10:26:27.671973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.933 [2024-07-15 10:26:27.676300] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19401d0 00:19:02.933 [2024-07-15 10:26:27.677848] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:02.933 10:26:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.307 "name": "raid_bdev1", 00:19:04.307 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:04.307 "strip_size_kb": 0, 00:19:04.307 "state": "online", 00:19:04.307 "raid_level": "raid1", 00:19:04.307 "superblock": true, 00:19:04.307 "num_base_bdevs": 2, 00:19:04.307 "num_base_bdevs_discovered": 2, 00:19:04.307 "num_base_bdevs_operational": 2, 00:19:04.307 "process": { 00:19:04.307 "type": "rebuild", 00:19:04.307 "target": "spare", 00:19:04.307 "progress": { 00:19:04.307 "blocks": 22528, 00:19:04.307 "percent": 35 00:19:04.307 } 00:19:04.307 }, 00:19:04.307 "base_bdevs_list": [ 00:19:04.307 { 00:19:04.307 "name": "spare", 00:19:04.307 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:19:04.307 "is_configured": true, 00:19:04.307 "data_offset": 2048, 00:19:04.307 "data_size": 63488 00:19:04.307 }, 00:19:04.307 { 00:19:04.307 "name": "BaseBdev2", 00:19:04.307 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:04.307 "is_configured": true, 00:19:04.307 "data_offset": 2048, 00:19:04.307 "data_size": 63488 00:19:04.307 } 00:19:04.307 ] 00:19:04.307 }' 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.307 10:26:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:04.564 [2024-07-15 10:26:29.100830] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.564 [2024-07-15 10:26:29.188229] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:04.564 [2024-07-15 10:26:29.188259] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:04.564 [2024-07-15 10:26:29.188273] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.564 [2024-07-15 10:26:29.188279] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.564 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.823 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.823 "name": "raid_bdev1", 00:19:04.823 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:04.823 "strip_size_kb": 0, 00:19:04.823 "state": "online", 00:19:04.823 "raid_level": "raid1", 00:19:04.823 "superblock": true, 00:19:04.823 "num_base_bdevs": 2, 00:19:04.823 "num_base_bdevs_discovered": 1, 00:19:04.823 "num_base_bdevs_operational": 1, 00:19:04.823 "base_bdevs_list": [ 00:19:04.823 { 00:19:04.823 "name": null, 00:19:04.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.823 "is_configured": false, 00:19:04.823 "data_offset": 2048, 00:19:04.823 "data_size": 63488 00:19:04.823 }, 00:19:04.823 { 00:19:04.823 "name": "BaseBdev2", 00:19:04.823 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:04.823 "is_configured": true, 00:19:04.823 "data_offset": 2048, 00:19:04.823 "data_size": 63488 00:19:04.823 } 00:19:04.823 ] 00:19:04.823 }' 00:19:04.823 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.823 10:26:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.080 10:26:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:05.338 [2024-07-15 10:26:30.022337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.338 [2024-07-15 10:26:30.022382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.338 [2024-07-15 10:26:30.022401] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1839c50 00:19:05.338 [2024-07-15 10:26:30.022410] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:19:05.338 [2024-07-15 10:26:30.022706] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.338 [2024-07-15 10:26:30.022720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.338 [2024-07-15 10:26:30.022779] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:05.338 [2024-07-15 10:26:30.022787] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:05.338 [2024-07-15 10:26:30.022794] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:05.338 [2024-07-15 10:26:30.022807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:05.338 [2024-07-15 10:26:30.027170] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19401d0 00:19:05.338 spare 00:19:05.338 [2024-07-15 10:26:30.028315] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.338 10:26:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:19:06.295 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.295 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:06.295 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:06.295 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:06.295 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:06.295 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.295 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.552 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:06.552 "name": "raid_bdev1", 00:19:06.552 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:06.552 "strip_size_kb": 0, 00:19:06.552 "state": "online", 00:19:06.552 "raid_level": "raid1", 00:19:06.552 "superblock": true, 00:19:06.552 "num_base_bdevs": 2, 00:19:06.552 "num_base_bdevs_discovered": 2, 00:19:06.552 "num_base_bdevs_operational": 2, 00:19:06.552 "process": { 00:19:06.552 "type": "rebuild", 00:19:06.552 "target": "spare", 00:19:06.552 "progress": { 00:19:06.552 "blocks": 22528, 00:19:06.552 "percent": 35 00:19:06.552 } 00:19:06.552 }, 00:19:06.552 "base_bdevs_list": [ 00:19:06.552 { 00:19:06.552 "name": "spare", 00:19:06.552 "uuid": "8447b6ae-1a25-5baa-8887-9d971eb77149", 00:19:06.552 "is_configured": true, 00:19:06.552 "data_offset": 2048, 00:19:06.552 "data_size": 63488 00:19:06.552 }, 00:19:06.552 { 00:19:06.552 "name": "BaseBdev2", 00:19:06.552 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:06.552 "is_configured": true, 00:19:06.552 "data_offset": 2048, 00:19:06.552 "data_size": 63488 00:19:06.552 } 00:19:06.552 ] 00:19:06.552 }' 00:19:06.552 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:06.552 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.552 10:26:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:06.552 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.552 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:06.809 [2024-07-15 10:26:31.446796] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.809 [2024-07-15 10:26:31.538558] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:06.809 [2024-07-15 10:26:31.538591] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.809 [2024-07-15 10:26:31.538601] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:06.809 [2024-07-15 10:26:31.538606] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.809 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.067 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.067 "name": "raid_bdev1", 00:19:07.067 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:07.067 "strip_size_kb": 0, 00:19:07.067 "state": "online", 00:19:07.067 "raid_level": "raid1", 00:19:07.067 "superblock": true, 00:19:07.067 "num_base_bdevs": 2, 00:19:07.067 "num_base_bdevs_discovered": 1, 00:19:07.067 "num_base_bdevs_operational": 1, 00:19:07.067 "base_bdevs_list": [ 00:19:07.067 { 00:19:07.067 "name": null, 00:19:07.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.067 "is_configured": false, 00:19:07.067 "data_offset": 2048, 00:19:07.067 "data_size": 63488 00:19:07.067 }, 00:19:07.067 { 00:19:07.067 "name": "BaseBdev2", 00:19:07.067 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:07.067 "is_configured": true, 00:19:07.067 "data_offset": 2048, 00:19:07.067 "data_size": 63488 00:19:07.067 } 00:19:07.067 ] 00:19:07.067 }' 00:19:07.067 10:26:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.067 10:26:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:07.670 "name": "raid_bdev1", 00:19:07.670 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:07.670 "strip_size_kb": 0, 00:19:07.670 "state": "online", 00:19:07.670 "raid_level": "raid1", 00:19:07.670 "superblock": true, 00:19:07.670 "num_base_bdevs": 2, 00:19:07.670 "num_base_bdevs_discovered": 1, 00:19:07.670 "num_base_bdevs_operational": 1, 00:19:07.670 "base_bdevs_list": [ 00:19:07.670 { 00:19:07.670 "name": null, 00:19:07.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.670 "is_configured": false, 00:19:07.670 "data_offset": 2048, 00:19:07.670 "data_size": 63488 00:19:07.670 }, 00:19:07.670 { 00:19:07.670 "name": "BaseBdev2", 00:19:07.670 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:07.670 "is_configured": true, 00:19:07.670 "data_offset": 2048, 00:19:07.670 "data_size": 63488 00:19:07.670 } 00:19:07.670 ] 00:19:07.670 }' 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:07.670 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:07.928 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:07.928 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:07.928 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:08.186 [2024-07-15 10:26:32.797540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:08.186 [2024-07-15 10:26:32.797578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.186 [2024-07-15 10:26:32.797612] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x179f7d0 00:19:08.186 [2024-07-15 10:26:32.797620] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.186 [2024-07-15 10:26:32.797882] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.186 [2024-07-15 10:26:32.797895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:08.186 [2024-07-15 10:26:32.797951] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:08.186 [2024-07-15 10:26:32.797960] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.186 [2024-07-15 10:26:32.797971] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:08.186 BaseBdev1 00:19:08.186 10:26:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.120 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.379 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.379 "name": "raid_bdev1", 00:19:09.379 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:09.379 "strip_size_kb": 0, 00:19:09.379 "state": "online", 00:19:09.379 "raid_level": "raid1", 00:19:09.379 "superblock": true, 00:19:09.379 "num_base_bdevs": 2, 00:19:09.379 "num_base_bdevs_discovered": 1, 00:19:09.379 "num_base_bdevs_operational": 1, 00:19:09.379 "base_bdevs_list": [ 00:19:09.379 { 00:19:09.379 "name": null, 00:19:09.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.379 "is_configured": false, 00:19:09.379 "data_offset": 2048, 00:19:09.379 "data_size": 63488 00:19:09.379 }, 00:19:09.379 { 00:19:09.379 "name": "BaseBdev2", 00:19:09.379 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:09.379 "is_configured": true, 00:19:09.379 "data_offset": 2048, 00:19:09.379 "data_size": 63488 00:19:09.379 } 00:19:09.379 ] 00:19:09.379 }' 00:19:09.379 10:26:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.379 10:26:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # 
local target=none 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:09.947 "name": "raid_bdev1", 00:19:09.947 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:09.947 "strip_size_kb": 0, 00:19:09.947 "state": "online", 00:19:09.947 "raid_level": "raid1", 00:19:09.947 "superblock": true, 00:19:09.947 "num_base_bdevs": 2, 00:19:09.947 "num_base_bdevs_discovered": 1, 00:19:09.947 "num_base_bdevs_operational": 1, 00:19:09.947 "base_bdevs_list": [ 00:19:09.947 { 00:19:09.947 "name": null, 00:19:09.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.947 "is_configured": false, 00:19:09.947 "data_offset": 2048, 00:19:09.947 "data_size": 63488 00:19:09.947 }, 00:19:09.947 { 00:19:09.947 "name": "BaseBdev2", 00:19:09.947 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:09.947 "is_configured": true, 00:19:09.947 "data_offset": 2048, 00:19:09.947 "data_size": 63488 00:19:09.947 } 00:19:09.947 ] 00:19:09.947 }' 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.947 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:10.206 [2024-07-15 10:26:34.899001] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.206 [2024-07-15 10:26:34.899100] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:10.206 [2024-07-15 10:26:34.899111] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:10.206 request: 00:19:10.206 { 00:19:10.206 "base_bdev": "BaseBdev1", 00:19:10.206 "raid_bdev": "raid_bdev1", 00:19:10.206 "method": "bdev_raid_add_base_bdev", 00:19:10.206 "req_id": 1 00:19:10.206 } 00:19:10.206 Got JSON-RPC error response 00:19:10.206 response: 00:19:10.206 { 00:19:10.206 "code": -22, 00:19:10.206 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:10.206 } 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:10.206 10:26:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:19:11.142 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.142 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:11.142 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.143 10:26:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.402 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.402 "name": "raid_bdev1", 00:19:11.402 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:11.402 "strip_size_kb": 0, 00:19:11.402 "state": "online", 00:19:11.402 "raid_level": "raid1", 00:19:11.402 "superblock": true, 00:19:11.402 "num_base_bdevs": 2, 00:19:11.402 "num_base_bdevs_discovered": 1, 00:19:11.402 "num_base_bdevs_operational": 1, 00:19:11.402 
"base_bdevs_list": [ 00:19:11.402 { 00:19:11.402 "name": null, 00:19:11.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.402 "is_configured": false, 00:19:11.402 "data_offset": 2048, 00:19:11.402 "data_size": 63488 00:19:11.402 }, 00:19:11.402 { 00:19:11.402 "name": "BaseBdev2", 00:19:11.402 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:11.402 "is_configured": true, 00:19:11.402 "data_offset": 2048, 00:19:11.402 "data_size": 63488 00:19:11.402 } 00:19:11.402 ] 00:19:11.402 }' 00:19:11.402 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.402 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:11.970 "name": "raid_bdev1", 00:19:11.970 "uuid": "2ef8a04e-15ce-4990-b342-13220ac811d0", 00:19:11.970 "strip_size_kb": 0, 00:19:11.970 "state": "online", 00:19:11.970 "raid_level": "raid1", 00:19:11.970 "superblock": true, 00:19:11.970 "num_base_bdevs": 2, 00:19:11.970 "num_base_bdevs_discovered": 1, 00:19:11.970 "num_base_bdevs_operational": 1, 00:19:11.970 "base_bdevs_list": [ 00:19:11.970 { 00:19:11.970 "name": null, 00:19:11.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.970 "is_configured": false, 00:19:11.970 "data_offset": 2048, 00:19:11.970 "data_size": 63488 00:19:11.970 }, 00:19:11.970 { 00:19:11.970 "name": "BaseBdev2", 00:19:11.970 "uuid": "53439cda-4720-5dad-8a7a-67963162e9fc", 00:19:11.970 "is_configured": true, 00:19:11.970 "data_offset": 2048, 00:19:11.970 "data_size": 63488 00:19:11.970 } 00:19:11.970 ] 00:19:11.970 }' 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:11.970 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 1851654 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1851654 ']' 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 1851654 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.229 10:26:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1851654 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1851654' 00:19:12.229 killing process with pid 1851654 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 1851654 00:19:12.229 Received shutdown signal, test time was about 60.000000 seconds 00:19:12.229 00:19:12.229 Latency(us) 00:19:12.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.229 =================================================================================================================== 00:19:12.229 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:12.229 [2024-07-15 10:26:36.856632] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:12.229 [2024-07-15 10:26:36.856698] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:12.229 [2024-07-15 10:26:36.856728] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:12.229 [2024-07-15 10:26:36.856736] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x179f450 name raid_bdev1, state offline 00:19:12.229 10:26:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 1851654 00:19:12.229 [2024-07-15 10:26:36.880529] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:19:12.488 00:19:12.488 real 0m29.125s 00:19:12.488 user 0m40.955s 00:19:12.488 sys 0m5.494s 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.488 ************************************ 00:19:12.488 END TEST raid_rebuild_test_sb 00:19:12.488 ************************************ 00:19:12.488 10:26:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:12.488 10:26:37 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:19:12.488 10:26:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:12.488 10:26:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.488 10:26:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:12.488 ************************************ 00:19:12.488 START TEST raid_rebuild_test_io 00:19:12.488 ************************************ 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 
00:19:12.488 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1856973 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1856973 /var/tmp/spdk-raid.sock 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 1856973 ']' 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:12.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.489 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:12.489 [2024-07-15 10:26:37.180192] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:12.489 [2024-07-15 10:26:37.180235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856973 ] 00:19:12.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:12.489 Zero copy mechanism will not be used. 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:12.489 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:12.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:12.489 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:12.489 [2024-07-15 10:26:37.271876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.748 [2024-07-15 10:26:37.346406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.748 [2024-07-15 10:26:37.402980] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:12.748 [2024-07-15 10:26:37.403007] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:13.316 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.316 10:26:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:19:13.316 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:13.316 10:26:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:13.575 BaseBdev1_malloc 00:19:13.575 10:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:13.575 [2024-07-15 10:26:38.267826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:13.575 [2024-07-15 10:26:38.267861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.575 [2024-07-15 10:26:38.267877] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e465f0 00:19:13.575 [2024-07-15 10:26:38.267904] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.575 [2024-07-15 10:26:38.268992] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.575 [2024-07-15 10:26:38.269013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:13.575 BaseBdev1 00:19:13.575 10:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:13.575 10:26:38 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:13.835 BaseBdev2_malloc 00:19:13.835 10:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:13.835 [2024-07-15 10:26:38.592326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:13.835 [2024-07-15 10:26:38.592358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:13.835 [2024-07-15 10:26:38.592377] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fea130 00:19:13.835 [2024-07-15 10:26:38.592385] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:13.835 [2024-07-15 10:26:38.593567] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:13.835 [2024-07-15 10:26:38.593589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:13.835 BaseBdev2 00:19:13.835 10:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:14.094 spare_malloc 00:19:14.094 10:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:14.353 spare_delay 00:19:14.353 10:26:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:14.353 [2024-07-15 10:26:39.093101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:14.353 [2024-07-15 10:26:39.093134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:14.353 [2024-07-15 10:26:39.093150] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fe9770 00:19:14.353 [2024-07-15 10:26:39.093174] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:14.354 [2024-07-15 10:26:39.094216] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:14.354 [2024-07-15 10:26:39.094239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:14.354 spare 00:19:14.354 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:14.613 [2024-07-15 10:26:39.249518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.613 [2024-07-15 10:26:39.250407] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.613 [2024-07-15 10:26:39.250459] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e3e270 00:19:14.613 [2024-07-15 10:26:39.250466] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:14.613 [2024-07-15 10:26:39.250604] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fea3c0 00:19:14.613 [2024-07-15 
10:26:39.250700] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e3e270 00:19:14.613 [2024-07-15 10:26:39.250707] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e3e270 00:19:14.613 [2024-07-15 10:26:39.250781] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.613 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.872 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.872 "name": "raid_bdev1", 00:19:14.872 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:14.872 "strip_size_kb": 0, 00:19:14.872 "state": "online", 00:19:14.872 "raid_level": "raid1", 00:19:14.872 "superblock": false, 00:19:14.872 "num_base_bdevs": 2, 00:19:14.872 "num_base_bdevs_discovered": 2, 00:19:14.872 "num_base_bdevs_operational": 2, 00:19:14.872 "base_bdevs_list": [ 00:19:14.872 { 00:19:14.872 "name": "BaseBdev1", 00:19:14.872 "uuid": "610a441c-0920-5bac-b7ee-0554eabe350f", 00:19:14.872 "is_configured": true, 00:19:14.872 "data_offset": 0, 00:19:14.872 "data_size": 65536 00:19:14.872 }, 00:19:14.872 { 00:19:14.872 "name": "BaseBdev2", 00:19:14.872 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:14.872 "is_configured": true, 00:19:14.872 "data_offset": 0, 00:19:14.872 "data_size": 65536 00:19:14.872 } 00:19:14.872 ] 00:19:14.872 }' 00:19:14.872 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.872 10:26:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.441 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:15.441 10:26:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:15.441 [2024-07-15 10:26:40.083812] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.441 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:19:15.441 10:26:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.441 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:15.700 [2024-07-15 10:26:40.366233] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fde8f0 00:19:15.700 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:15.700 Zero copy mechanism will not be used. 00:19:15.700 Running I/O for 60 seconds... 00:19:15.700 [2024-07-15 10:26:40.438884] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:15.700 [2024-07-15 10:26:40.444106] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1fde8f0 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.700 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.959 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.959 "name": "raid_bdev1", 00:19:15.960 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:15.960 "strip_size_kb": 0, 00:19:15.960 "state": "online", 00:19:15.960 "raid_level": "raid1", 00:19:15.960 "superblock": false, 00:19:15.960 "num_base_bdevs": 2, 00:19:15.960 "num_base_bdevs_discovered": 1, 00:19:15.960 "num_base_bdevs_operational": 1, 00:19:15.960 "base_bdevs_list": [ 00:19:15.960 { 00:19:15.960 "name": null, 00:19:15.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.960 "is_configured": false, 00:19:15.960 "data_offset": 0, 00:19:15.960 "data_size": 65536 00:19:15.960 }, 
00:19:15.960 { 00:19:15.960 "name": "BaseBdev2", 00:19:15.960 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:15.960 "is_configured": true, 00:19:15.960 "data_offset": 0, 00:19:15.960 "data_size": 65536 00:19:15.960 } 00:19:15.960 ] 00:19:15.960 }' 00:19:15.960 10:26:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.960 10:26:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:16.526 10:26:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.526 [2024-07-15 10:26:41.277349] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.785 10:26:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:16.785 [2024-07-15 10:26:41.316495] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b44b40 00:19:16.785 [2024-07-15 10:26:41.318168] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:16.785 [2024-07-15 10:26:41.430892] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:16.785 [2024-07-15 10:26:41.431247] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:17.044 [2024-07-15 10:26:41.643513] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:17.044 [2024-07-15 10:26:41.643675] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:17.303 [2024-07-15 10:26:41.879346] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:17.562 [2024-07-15 10:26:42.097259] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:17.562 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.562 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:17.562 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:17.562 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:17.562 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:17.562 [2024-07-15 10:26:42.325688] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:17.562 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.562 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.821 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.821 "name": "raid_bdev1", 00:19:17.821 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:17.821 "strip_size_kb": 0, 00:19:17.821 "state": "online", 00:19:17.821 "raid_level": "raid1", 00:19:17.821 "superblock": false, 00:19:17.821 "num_base_bdevs": 2, 00:19:17.821 "num_base_bdevs_discovered": 
2, 00:19:17.821 "num_base_bdevs_operational": 2, 00:19:17.821 "process": { 00:19:17.821 "type": "rebuild", 00:19:17.821 "target": "spare", 00:19:17.821 "progress": { 00:19:17.821 "blocks": 14336, 00:19:17.821 "percent": 21 00:19:17.821 } 00:19:17.821 }, 00:19:17.821 "base_bdevs_list": [ 00:19:17.821 { 00:19:17.821 "name": "spare", 00:19:17.821 "uuid": "3af63f8a-7932-5e23-9529-a8cfbb08244c", 00:19:17.821 "is_configured": true, 00:19:17.821 "data_offset": 0, 00:19:17.821 "data_size": 65536 00:19:17.821 }, 00:19:17.821 { 00:19:17.821 "name": "BaseBdev2", 00:19:17.821 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:17.821 "is_configured": true, 00:19:17.821 "data_offset": 0, 00:19:17.821 "data_size": 65536 00:19:17.821 } 00:19:17.821 ] 00:19:17.821 }' 00:19:17.821 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:17.821 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.821 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:17.821 [2024-07-15 10:26:42.550702] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:17.821 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.821 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:18.080 [2024-07-15 10:26:42.741171] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.340 [2024-07-15 10:26:42.891825] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:18.340 [2024-07-15 10:26:42.893483] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.340 [2024-07-15 10:26:42.893503] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.340 [2024-07-15 10:26:42.893509] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:18.340 [2024-07-15 10:26:42.908673] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1fde8f0 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.340 10:26:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.340 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.340 "name": "raid_bdev1", 00:19:18.340 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:18.340 "strip_size_kb": 0, 00:19:18.340 "state": "online", 00:19:18.340 "raid_level": "raid1", 00:19:18.340 "superblock": false, 00:19:18.340 "num_base_bdevs": 2, 00:19:18.340 "num_base_bdevs_discovered": 1, 00:19:18.340 "num_base_bdevs_operational": 1, 00:19:18.340 "base_bdevs_list": [ 00:19:18.340 { 00:19:18.340 "name": null, 00:19:18.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.340 "is_configured": false, 00:19:18.340 "data_offset": 0, 00:19:18.340 "data_size": 65536 00:19:18.340 }, 00:19:18.340 { 00:19:18.340 "name": "BaseBdev2", 00:19:18.340 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:18.340 "is_configured": true, 00:19:18.340 "data_offset": 0, 00:19:18.340 "data_size": 65536 00:19:18.340 } 00:19:18.340 ] 00:19:18.340 }' 00:19:18.340 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.340 10:26:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.908 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.908 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:18.908 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:18.908 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:18.908 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:18.908 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.908 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.167 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:19.167 "name": "raid_bdev1", 00:19:19.167 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:19.167 "strip_size_kb": 0, 00:19:19.167 "state": "online", 00:19:19.167 "raid_level": "raid1", 00:19:19.167 "superblock": false, 00:19:19.167 "num_base_bdevs": 2, 00:19:19.167 "num_base_bdevs_discovered": 1, 00:19:19.167 "num_base_bdevs_operational": 1, 00:19:19.167 "base_bdevs_list": [ 00:19:19.167 { 00:19:19.167 "name": null, 00:19:19.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.167 "is_configured": false, 00:19:19.167 "data_offset": 0, 00:19:19.167 "data_size": 65536 00:19:19.167 }, 00:19:19.167 { 00:19:19.167 "name": "BaseBdev2", 00:19:19.167 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:19.167 "is_configured": true, 00:19:19.167 "data_offset": 0, 00:19:19.167 "data_size": 65536 00:19:19.167 } 00:19:19.167 ] 00:19:19.167 }' 00:19:19.167 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:19.167 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:19.167 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.target // "none"' 00:19:19.167 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:19.167 10:26:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:19.426 [2024-07-15 10:26:44.051261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.426 10:26:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:19.426 [2024-07-15 10:26:44.090880] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e45ea0 00:19:19.426 [2024-07-15 10:26:44.091944] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.426 [2024-07-15 10:26:44.199245] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:19.426 [2024-07-15 10:26:44.199499] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:19.686 [2024-07-15 10:26:44.311053] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:19.686 [2024-07-15 10:26:44.311165] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:19.946 [2024-07-15 10:26:44.636709] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:19.946 [2024-07-15 10:26:44.637073] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:20.206 [2024-07-15 10:26:44.759132] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:20.466 [2024-07-15 10:26:45.074170] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:20.466 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.466 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:20.466 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:20.466 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:20.466 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:20.466 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.466 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:20.795 "name": "raid_bdev1", 00:19:20.795 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:20.795 "strip_size_kb": 0, 00:19:20.795 "state": "online", 00:19:20.795 "raid_level": "raid1", 00:19:20.795 "superblock": false, 00:19:20.795 "num_base_bdevs": 2, 00:19:20.795 "num_base_bdevs_discovered": 2, 00:19:20.795 "num_base_bdevs_operational": 2, 00:19:20.795 "process": { 00:19:20.795 "type": "rebuild", 00:19:20.795 "target": "spare", 00:19:20.795 
"progress": { 00:19:20.795 "blocks": 14336, 00:19:20.795 "percent": 21 00:19:20.795 } 00:19:20.795 }, 00:19:20.795 "base_bdevs_list": [ 00:19:20.795 { 00:19:20.795 "name": "spare", 00:19:20.795 "uuid": "3af63f8a-7932-5e23-9529-a8cfbb08244c", 00:19:20.795 "is_configured": true, 00:19:20.795 "data_offset": 0, 00:19:20.795 "data_size": 65536 00:19:20.795 }, 00:19:20.795 { 00:19:20.795 "name": "BaseBdev2", 00:19:20.795 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:20.795 "is_configured": true, 00:19:20.795 "data_offset": 0, 00:19:20.795 "data_size": 65536 00:19:20.795 } 00:19:20.795 ] 00:19:20.795 }' 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:20.795 [2024-07-15 10:26:45.288156] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=626 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.795 [2024-07-15 10:26:45.513906] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:20.795 "name": "raid_bdev1", 00:19:20.795 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:20.795 "strip_size_kb": 0, 00:19:20.795 "state": "online", 00:19:20.795 "raid_level": "raid1", 00:19:20.795 "superblock": false, 00:19:20.795 "num_base_bdevs": 2, 00:19:20.795 "num_base_bdevs_discovered": 2, 00:19:20.795 "num_base_bdevs_operational": 2, 00:19:20.795 "process": { 00:19:20.795 "type": "rebuild", 00:19:20.795 "target": "spare", 00:19:20.795 "progress": { 00:19:20.795 "blocks": 18432, 00:19:20.795 "percent": 28 
00:19:20.795 } 00:19:20.795 }, 00:19:20.795 "base_bdevs_list": [ 00:19:20.795 { 00:19:20.795 "name": "spare", 00:19:20.795 "uuid": "3af63f8a-7932-5e23-9529-a8cfbb08244c", 00:19:20.795 "is_configured": true, 00:19:20.795 "data_offset": 0, 00:19:20.795 "data_size": 65536 00:19:20.795 }, 00:19:20.795 { 00:19:20.795 "name": "BaseBdev2", 00:19:20.795 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:20.795 "is_configured": true, 00:19:20.795 "data_offset": 0, 00:19:20.795 "data_size": 65536 00:19:20.795 } 00:19:20.795 ] 00:19:20.795 }' 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.795 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:21.068 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.068 10:26:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:21.068 [2024-07-15 10:26:45.727108] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:21.068 [2024-07-15 10:26:45.727322] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:21.327 [2024-07-15 10:26:45.974208] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:21.585 [2024-07-15 10:26:46.304119] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:21.585 [2024-07-15 10:26:46.304424] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:21.845 [2024-07-15 10:26:46.521169] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:21.845 [2024-07-15 10:26:46.521269] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:21.845 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:21.845 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.845 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:21.845 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:21.845 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:21.845 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:21.845 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.845 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.104 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:22.104 "name": "raid_bdev1", 00:19:22.104 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:22.104 "strip_size_kb": 0, 00:19:22.104 "state": "online", 00:19:22.104 "raid_level": "raid1", 00:19:22.104 
"superblock": false, 00:19:22.104 "num_base_bdevs": 2, 00:19:22.104 "num_base_bdevs_discovered": 2, 00:19:22.104 "num_base_bdevs_operational": 2, 00:19:22.104 "process": { 00:19:22.104 "type": "rebuild", 00:19:22.104 "target": "spare", 00:19:22.104 "progress": { 00:19:22.104 "blocks": 36864, 00:19:22.104 "percent": 56 00:19:22.104 } 00:19:22.104 }, 00:19:22.104 "base_bdevs_list": [ 00:19:22.104 { 00:19:22.104 "name": "spare", 00:19:22.104 "uuid": "3af63f8a-7932-5e23-9529-a8cfbb08244c", 00:19:22.104 "is_configured": true, 00:19:22.104 "data_offset": 0, 00:19:22.104 "data_size": 65536 00:19:22.104 }, 00:19:22.104 { 00:19:22.104 "name": "BaseBdev2", 00:19:22.104 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:22.104 "is_configured": true, 00:19:22.104 "data_offset": 0, 00:19:22.104 "data_size": 65536 00:19:22.104 } 00:19:22.104 ] 00:19:22.104 }' 00:19:22.104 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:22.104 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.104 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:22.104 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.104 10:26:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:22.671 [2024-07-15 10:26:47.308835] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:22.930 [2024-07-15 10:26:47.640219] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:23.188 10:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:23.188 10:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.188 10:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:23.188 10:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:23.188 10:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:23.188 10:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:23.188 10:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.188 10:26:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.451 10:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:23.451 "name": "raid_bdev1", 00:19:23.451 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:23.451 "strip_size_kb": 0, 00:19:23.451 "state": "online", 00:19:23.451 "raid_level": "raid1", 00:19:23.451 "superblock": false, 00:19:23.451 "num_base_bdevs": 2, 00:19:23.451 "num_base_bdevs_discovered": 2, 00:19:23.451 "num_base_bdevs_operational": 2, 00:19:23.451 "process": { 00:19:23.451 "type": "rebuild", 00:19:23.451 "target": "spare", 00:19:23.451 "progress": { 00:19:23.451 "blocks": 59392, 00:19:23.451 "percent": 90 00:19:23.451 } 00:19:23.451 }, 00:19:23.451 "base_bdevs_list": [ 00:19:23.451 { 00:19:23.451 "name": "spare", 00:19:23.451 "uuid": "3af63f8a-7932-5e23-9529-a8cfbb08244c", 00:19:23.451 
"is_configured": true, 00:19:23.451 "data_offset": 0, 00:19:23.451 "data_size": 65536 00:19:23.451 }, 00:19:23.451 { 00:19:23.451 "name": "BaseBdev2", 00:19:23.451 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:23.451 "is_configured": true, 00:19:23.451 "data_offset": 0, 00:19:23.451 "data_size": 65536 00:19:23.451 } 00:19:23.451 ] 00:19:23.451 }' 00:19:23.451 10:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:23.451 10:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.451 10:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:23.451 10:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.451 10:26:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:23.712 [2024-07-15 10:26:48.279058] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:23.712 [2024-07-15 10:26:48.379287] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:23.712 [2024-07-15 10:26:48.386168] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:24.647 "name": "raid_bdev1", 00:19:24.647 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:24.647 "strip_size_kb": 0, 00:19:24.647 "state": "online", 00:19:24.647 "raid_level": "raid1", 00:19:24.647 "superblock": false, 00:19:24.647 "num_base_bdevs": 2, 00:19:24.647 "num_base_bdevs_discovered": 2, 00:19:24.647 "num_base_bdevs_operational": 2, 00:19:24.647 "base_bdevs_list": [ 00:19:24.647 { 00:19:24.647 "name": "spare", 00:19:24.647 "uuid": "3af63f8a-7932-5e23-9529-a8cfbb08244c", 00:19:24.647 "is_configured": true, 00:19:24.647 "data_offset": 0, 00:19:24.647 "data_size": 65536 00:19:24.647 }, 00:19:24.647 { 00:19:24.647 "name": "BaseBdev2", 00:19:24.647 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:24.647 "is_configured": true, 00:19:24.647 "data_offset": 0, 00:19:24.647 "data_size": 65536 00:19:24.647 } 00:19:24.647 ] 00:19:24.647 }' 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.647 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:24.906 "name": "raid_bdev1", 00:19:24.906 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:24.906 "strip_size_kb": 0, 00:19:24.906 "state": "online", 00:19:24.906 "raid_level": "raid1", 00:19:24.906 "superblock": false, 00:19:24.906 "num_base_bdevs": 2, 00:19:24.906 "num_base_bdevs_discovered": 2, 00:19:24.906 "num_base_bdevs_operational": 2, 00:19:24.906 "base_bdevs_list": [ 00:19:24.906 { 00:19:24.906 "name": "spare", 00:19:24.906 "uuid": "3af63f8a-7932-5e23-9529-a8cfbb08244c", 00:19:24.906 "is_configured": true, 00:19:24.906 "data_offset": 0, 00:19:24.906 "data_size": 65536 00:19:24.906 }, 00:19:24.906 { 00:19:24.906 "name": "BaseBdev2", 00:19:24.906 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:24.906 "is_configured": true, 00:19:24.906 "data_offset": 0, 00:19:24.906 "data_size": 65536 00:19:24.906 } 00:19:24.906 ] 00:19:24.906 }' 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.906 
10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.906 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.164 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.164 "name": "raid_bdev1", 00:19:25.164 "uuid": "e4b2a85d-e0c6-4927-9a82-ad2aebb9431b", 00:19:25.164 "strip_size_kb": 0, 00:19:25.164 "state": "online", 00:19:25.164 "raid_level": "raid1", 00:19:25.164 "superblock": false, 00:19:25.164 "num_base_bdevs": 2, 00:19:25.164 "num_base_bdevs_discovered": 2, 00:19:25.164 "num_base_bdevs_operational": 2, 00:19:25.164 "base_bdevs_list": [ 00:19:25.164 { 00:19:25.164 "name": "spare", 00:19:25.164 "uuid": "3af63f8a-7932-5e23-9529-a8cfbb08244c", 00:19:25.164 "is_configured": true, 00:19:25.164 "data_offset": 0, 00:19:25.164 "data_size": 65536 00:19:25.164 }, 00:19:25.164 { 00:19:25.164 "name": "BaseBdev2", 00:19:25.164 "uuid": "38c9953b-a674-5dc4-812d-17713a07566f", 00:19:25.164 "is_configured": true, 00:19:25.164 "data_offset": 0, 00:19:25.164 "data_size": 65536 00:19:25.164 } 00:19:25.164 ] 00:19:25.164 }' 00:19:25.164 10:26:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.164 10:26:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:25.731 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:25.732 [2024-07-15 10:26:50.476375] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.732 [2024-07-15 10:26:50.476398] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.989 00:19:25.989 Latency(us) 00:19:25.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.989 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:25.989 raid_bdev1 : 10.14 120.66 361.98 0.00 0.00 11107.32 242.48 112407.35 00:19:25.989 =================================================================================================================== 00:19:25.989 Total : 120.66 361.98 0.00 0.00 11107.32 242.48 112407.35 00:19:25.989 [2024-07-15 10:26:50.531226] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.989 [2024-07-15 10:26:50.531260] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.989 [2024-07-15 10:26:50.531309] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.989 [2024-07-15 10:26:50.531317] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e3e270 name raid_bdev1, state offline 00:19:25.989 0 00:19:25.989 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.989 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = 
true ']' 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.990 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:26.249 /dev/nbd0 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.249 1+0 records in 00:19:26.249 1+0 records out 00:19:26.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240878 s, 17.0 MB/s 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.249 10:26:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.249 10:26:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:26.508 /dev/nbd1 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.508 1+0 records in 00:19:26.508 1+0 records out 00:19:26.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260074 s, 15.7 MB/s 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.508 10:26:51 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.508 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.766 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 1856973 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 1856973 ']' 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 1856973 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1856973 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1856973' 00:19:27.026 killing process with pid 1856973 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 1856973 00:19:27.026 Received shutdown signal, test time was about 11.275736 seconds 00:19:27.026 00:19:27.026 Latency(us) 00:19:27.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.026 =================================================================================================================== 00:19:27.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.026 [2024-07-15 10:26:51.671070] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:27.026 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 1856973 00:19:27.026 [2024-07-15 10:26:51.690030] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:19:27.285 00:19:27.285 real 0m14.731s 00:19:27.285 user 0m21.569s 00:19:27.285 sys 0m2.297s 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.285 ************************************ 00:19:27.285 END TEST raid_rebuild_test_io 00:19:27.285 ************************************ 00:19:27.285 10:26:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:27.285 10:26:51 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:19:27.285 10:26:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:27.285 10:26:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.285 10:26:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.285 ************************************ 00:19:27.285 START TEST raid_rebuild_test_sb_io 00:19:27.285 ************************************ 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:19:27.285 10:26:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:27.285 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1859785 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1859785 /var/tmp/spdk-raid.sock 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 1859785 ']' 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:27.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
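Once bdevperf is listening on /var/tmp/spdk-raid.sock, the surrounding traces drive it through rpc.py and verify rebuild progress by filtering the bdev_raid_get_bdevs JSON with jq. A minimal standalone sketch of that polling loop, reusing only commands and paths that appear in this log (the rpc.py script, the spdk-raid.sock socket, the jq filters, and the 60-second window), might look like:

    #!/usr/bin/env bash
    # Sketch only: paths and the 60 s timeout mirror what this log shows; adjust for other setups.
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=60
    SECONDS=0
    while (( SECONDS < timeout )); do
        # Dump all raid bdevs and keep only the entry named raid_bdev1.
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # ".process" is present only while a rebuild is in flight; fall back to "none" once it finishes.
        ptype=$(echo "$info" | jq -r '.process.type // "none"')
        target=$(echo "$info" | jq -r '.process.target // "none"')
        # Keep polling while a rebuild onto the spare is still running, otherwise stop.
        [[ $ptype == rebuild && $target == spare ]] || break
        sleep 1
    done

The '// "none"' fallback in the jq filters is what lets the same query distinguish an active rebuild from a finished one without a separate RPC, which is how the traces above flip from [[ rebuild == rebuild ]] to [[ none == rebuild ]] once the process entry disappears.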
00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.286 10:26:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.286 [2024-07-15 10:26:52.018091] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:27.286 [2024-07-15 10:26:52.018138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1859785 ] 00:19:27.286 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:27.286 Zero copy mechanism will not be used. 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 
0000:3f:01.3 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:27.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:27.286 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:27.545 [2024-07-15 10:26:52.110397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.545 [2024-07-15 10:26:52.189757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.545 [2024-07-15 10:26:52.242153] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.545 [2024-07-15 10:26:52.242176] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.112 10:26:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.112 10:26:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:19:28.112 10:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:28.112 10:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:28.371 BaseBdev1_malloc 00:19:28.371 10:26:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:28.371 [2024-07-15 10:26:53.134162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:28.371 [2024-07-15 10:26:53.134197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.371 [2024-07-15 10:26:53.134212] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12125f0 00:19:28.371 [2024-07-15 10:26:53.134221] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.371 [2024-07-15 10:26:53.135313] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:19:28.371 [2024-07-15 10:26:53.135336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:28.371 BaseBdev1 00:19:28.371 10:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:28.371 10:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:28.629 BaseBdev2_malloc 00:19:28.629 10:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:28.887 [2024-07-15 10:26:53.482946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:28.887 [2024-07-15 10:26:53.482979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.887 [2024-07-15 10:26:53.482993] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13b6130 00:19:28.887 [2024-07-15 10:26:53.483018] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.887 [2024-07-15 10:26:53.484082] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.887 [2024-07-15 10:26:53.484104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:28.887 BaseBdev2 00:19:28.887 10:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:28.887 spare_malloc 00:19:28.887 10:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:29.144 spare_delay 00:19:29.144 10:26:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:29.403 [2024-07-15 10:26:54.011868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:29.403 [2024-07-15 10:26:54.011908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.403 [2024-07-15 10:26:54.011923] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13b5770 00:19:29.403 [2024-07-15 10:26:54.011948] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.403 [2024-07-15 10:26:54.013038] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.403 [2024-07-15 10:26:54.013059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:29.403 spare 00:19:29.403 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:29.403 [2024-07-15 10:26:54.180322] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.403 [2024-07-15 10:26:54.181178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.403 [2024-07-15 10:26:54.181296] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x120a270 00:19:29.403 [2024-07-15 10:26:54.181305] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:29.403 [2024-07-15 10:26:54.181426] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13b63c0 00:19:29.403 [2024-07-15 10:26:54.181514] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x120a270 00:19:29.403 [2024-07-15 10:26:54.181520] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x120a270 00:19:29.403 [2024-07-15 10:26:54.181581] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.661 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.661 "name": "raid_bdev1", 00:19:29.661 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:29.661 "strip_size_kb": 0, 00:19:29.661 "state": "online", 00:19:29.661 "raid_level": "raid1", 00:19:29.661 "superblock": true, 00:19:29.661 "num_base_bdevs": 2, 00:19:29.661 "num_base_bdevs_discovered": 2, 00:19:29.661 "num_base_bdevs_operational": 2, 00:19:29.661 "base_bdevs_list": [ 00:19:29.661 { 00:19:29.661 "name": "BaseBdev1", 00:19:29.661 "uuid": "035da501-b864-5a31-ba34-5dadbce1f362", 00:19:29.661 "is_configured": true, 00:19:29.661 "data_offset": 2048, 00:19:29.661 "data_size": 63488 00:19:29.661 }, 00:19:29.661 { 00:19:29.661 "name": "BaseBdev2", 00:19:29.662 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:29.662 "is_configured": true, 00:19:29.662 "data_offset": 2048, 00:19:29.662 "data_size": 63488 00:19:29.662 } 00:19:29.662 ] 00:19:29.662 }' 00:19:29.662 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.662 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.229 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:19:30.229 10:26:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:30.229 [2024-07-15 10:26:54.998570] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.229 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:19:30.488 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.488 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:30.488 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:19:30.488 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:19:30.488 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:30.488 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:30.746 [2024-07-15 10:26:55.277007] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13ab620 00:19:30.746 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:30.746 Zero copy mechanism will not be used. 00:19:30.746 Running I/O for 60 seconds... 00:19:30.746 [2024-07-15 10:26:55.349349] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.746 [2024-07-15 10:26:55.349520] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x13ab620 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.746 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.005 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.005 "name": "raid_bdev1", 00:19:31.005 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:31.005 "strip_size_kb": 0, 00:19:31.005 
"state": "online", 00:19:31.005 "raid_level": "raid1", 00:19:31.005 "superblock": true, 00:19:31.005 "num_base_bdevs": 2, 00:19:31.005 "num_base_bdevs_discovered": 1, 00:19:31.005 "num_base_bdevs_operational": 1, 00:19:31.005 "base_bdevs_list": [ 00:19:31.005 { 00:19:31.005 "name": null, 00:19:31.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.005 "is_configured": false, 00:19:31.005 "data_offset": 2048, 00:19:31.005 "data_size": 63488 00:19:31.005 }, 00:19:31.005 { 00:19:31.005 "name": "BaseBdev2", 00:19:31.005 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:31.005 "is_configured": true, 00:19:31.005 "data_offset": 2048, 00:19:31.005 "data_size": 63488 00:19:31.005 } 00:19:31.005 ] 00:19:31.005 }' 00:19:31.005 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.005 10:26:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.571 10:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:31.571 [2024-07-15 10:26:56.229883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.571 10:26:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:19:31.571 [2024-07-15 10:26:56.278757] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13a9a60 00:19:31.571 [2024-07-15 10:26:56.280591] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:31.830 [2024-07-15 10:26:56.398338] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:31.830 [2024-07-15 10:26:56.398599] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:31.830 [2024-07-15 10:26:56.511591] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:31.830 [2024-07-15 10:26:56.511705] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:32.088 [2024-07-15 10:26:56.826908] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:32.347 [2024-07-15 10:26:57.059447] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:32.605 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.605 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:32.605 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:32.605 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:32.605 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:32.605 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.605 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.863 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.863 "name": "raid_bdev1", 00:19:32.863 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:32.863 "strip_size_kb": 0, 00:19:32.863 "state": "online", 00:19:32.863 "raid_level": "raid1", 00:19:32.863 "superblock": true, 00:19:32.863 "num_base_bdevs": 2, 00:19:32.863 "num_base_bdevs_discovered": 2, 00:19:32.863 "num_base_bdevs_operational": 2, 00:19:32.863 "process": { 00:19:32.863 "type": "rebuild", 00:19:32.863 "target": "spare", 00:19:32.863 "progress": { 00:19:32.863 "blocks": 14336, 00:19:32.863 "percent": 22 00:19:32.863 } 00:19:32.863 }, 00:19:32.863 "base_bdevs_list": [ 00:19:32.863 { 00:19:32.863 "name": "spare", 00:19:32.863 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:32.863 "is_configured": true, 00:19:32.863 "data_offset": 2048, 00:19:32.863 "data_size": 63488 00:19:32.863 }, 00:19:32.863 { 00:19:32.863 "name": "BaseBdev2", 00:19:32.863 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:32.863 "is_configured": true, 00:19:32.863 "data_offset": 2048, 00:19:32.863 "data_size": 63488 00:19:32.863 } 00:19:32.864 ] 00:19:32.864 }' 00:19:32.864 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:32.864 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.864 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:32.864 [2024-07-15 10:26:57.508136] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:32.864 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.864 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:33.122 [2024-07-15 10:26:57.682169] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.122 [2024-07-15 10:26:57.850875] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:33.122 [2024-07-15 10:26:57.857669] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.122 [2024-07-15 10:26:57.857688] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:33.122 [2024-07-15 10:26:57.857694] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:33.122 [2024-07-15 10:26:57.867619] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x13ab620 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.122 10:26:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.122 10:26:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.380 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:33.380 "name": "raid_bdev1", 00:19:33.380 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:33.380 "strip_size_kb": 0, 00:19:33.380 "state": "online", 00:19:33.380 "raid_level": "raid1", 00:19:33.380 "superblock": true, 00:19:33.380 "num_base_bdevs": 2, 00:19:33.380 "num_base_bdevs_discovered": 1, 00:19:33.380 "num_base_bdevs_operational": 1, 00:19:33.380 "base_bdevs_list": [ 00:19:33.380 { 00:19:33.380 "name": null, 00:19:33.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.380 "is_configured": false, 00:19:33.380 "data_offset": 2048, 00:19:33.380 "data_size": 63488 00:19:33.380 }, 00:19:33.380 { 00:19:33.380 "name": "BaseBdev2", 00:19:33.380 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:33.380 "is_configured": true, 00:19:33.380 "data_offset": 2048, 00:19:33.380 "data_size": 63488 00:19:33.380 } 00:19:33.380 ] 00:19:33.380 }' 00:19:33.380 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:33.380 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.015 "name": "raid_bdev1", 00:19:34.015 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:34.015 "strip_size_kb": 0, 00:19:34.015 "state": "online", 00:19:34.015 "raid_level": "raid1", 00:19:34.015 "superblock": true, 00:19:34.015 "num_base_bdevs": 2, 00:19:34.015 "num_base_bdevs_discovered": 1, 00:19:34.015 "num_base_bdevs_operational": 1, 00:19:34.015 "base_bdevs_list": [ 00:19:34.015 { 00:19:34.015 "name": null, 00:19:34.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.015 "is_configured": false, 00:19:34.015 "data_offset": 2048, 00:19:34.015 "data_size": 63488 00:19:34.015 }, 00:19:34.015 { 00:19:34.015 "name": "BaseBdev2", 00:19:34.015 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:34.015 
"is_configured": true, 00:19:34.015 "data_offset": 2048, 00:19:34.015 "data_size": 63488 00:19:34.015 } 00:19:34.015 ] 00:19:34.015 }' 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:34.015 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:34.271 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:34.271 10:26:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:34.272 [2024-07-15 10:26:58.994291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:34.272 [2024-07-15 10:26:59.027698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13ad1e0 00:19:34.272 [2024-07-15 10:26:59.028746] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:34.272 10:26:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:34.529 [2024-07-15 10:26:59.136045] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:34.529 [2024-07-15 10:26:59.136306] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:34.529 [2024-07-15 10:26:59.248663] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:34.529 [2024-07-15 10:26:59.248775] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:34.786 [2024-07-15 10:26:59.488933] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:35.043 [2024-07-15 10:26:59.702098] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:35.043 [2024-07-15 10:26:59.702288] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:35.302 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.302 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:35.302 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:35.302 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:35.302 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:35.302 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.302 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.560 "name": "raid_bdev1", 00:19:35.560 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:35.560 "strip_size_kb": 0, 
00:19:35.560 "state": "online", 00:19:35.560 "raid_level": "raid1", 00:19:35.560 "superblock": true, 00:19:35.560 "num_base_bdevs": 2, 00:19:35.560 "num_base_bdevs_discovered": 2, 00:19:35.560 "num_base_bdevs_operational": 2, 00:19:35.560 "process": { 00:19:35.560 "type": "rebuild", 00:19:35.560 "target": "spare", 00:19:35.560 "progress": { 00:19:35.560 "blocks": 16384, 00:19:35.560 "percent": 25 00:19:35.560 } 00:19:35.560 }, 00:19:35.560 "base_bdevs_list": [ 00:19:35.560 { 00:19:35.560 "name": "spare", 00:19:35.560 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:35.560 "is_configured": true, 00:19:35.560 "data_offset": 2048, 00:19:35.560 "data_size": 63488 00:19:35.560 }, 00:19:35.560 { 00:19:35.560 "name": "BaseBdev2", 00:19:35.560 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:35.560 "is_configured": true, 00:19:35.560 "data_offset": 2048, 00:19:35.560 "data_size": 63488 00:19:35.560 } 00:19:35.560 ] 00:19:35.560 }' 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:19:35.560 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=641 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.560 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.818 [2024-07-15 10:27:00.358785] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:35.818 [2024-07-15 10:27:00.359171] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:35.818 10:27:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.818 "name": "raid_bdev1", 00:19:35.818 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:35.818 "strip_size_kb": 0, 00:19:35.818 "state": "online", 00:19:35.818 "raid_level": "raid1", 00:19:35.818 "superblock": true, 00:19:35.818 "num_base_bdevs": 2, 00:19:35.818 "num_base_bdevs_discovered": 2, 00:19:35.818 "num_base_bdevs_operational": 2, 00:19:35.818 "process": { 00:19:35.818 "type": "rebuild", 00:19:35.818 "target": "spare", 00:19:35.818 "progress": { 00:19:35.818 "blocks": 20480, 00:19:35.818 "percent": 32 00:19:35.818 } 00:19:35.818 }, 00:19:35.818 "base_bdevs_list": [ 00:19:35.818 { 00:19:35.818 "name": "spare", 00:19:35.818 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:35.818 "is_configured": true, 00:19:35.818 "data_offset": 2048, 00:19:35.818 "data_size": 63488 00:19:35.818 }, 00:19:35.818 { 00:19:35.818 "name": "BaseBdev2", 00:19:35.818 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:35.818 "is_configured": true, 00:19:35.818 "data_offset": 2048, 00:19:35.818 "data_size": 63488 00:19:35.818 } 00:19:35.818 ] 00:19:35.819 }' 00:19:35.819 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:35.819 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.819 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:35.819 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.819 10:27:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:35.819 [2024-07-15 10:27:00.574000] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:36.753 [2024-07-15 10:27:01.309507] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.012 "name": "raid_bdev1", 00:19:37.012 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:37.012 "strip_size_kb": 0, 00:19:37.012 "state": "online", 00:19:37.012 "raid_level": "raid1", 00:19:37.012 "superblock": true, 00:19:37.012 "num_base_bdevs": 2, 00:19:37.012 "num_base_bdevs_discovered": 2, 00:19:37.012 "num_base_bdevs_operational": 2, 00:19:37.012 "process": { 
00:19:37.012 "type": "rebuild", 00:19:37.012 "target": "spare", 00:19:37.012 "progress": { 00:19:37.012 "blocks": 40960, 00:19:37.012 "percent": 64 00:19:37.012 } 00:19:37.012 }, 00:19:37.012 "base_bdevs_list": [ 00:19:37.012 { 00:19:37.012 "name": "spare", 00:19:37.012 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:37.012 "is_configured": true, 00:19:37.012 "data_offset": 2048, 00:19:37.012 "data_size": 63488 00:19:37.012 }, 00:19:37.012 { 00:19:37.012 "name": "BaseBdev2", 00:19:37.012 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:37.012 "is_configured": true, 00:19:37.012 "data_offset": 2048, 00:19:37.012 "data_size": 63488 00:19:37.012 } 00:19:37.012 ] 00:19:37.012 }' 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.012 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:37.271 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.271 10:27:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:37.271 [2024-07-15 10:27:02.027112] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:37.837 [2024-07-15 10:27:02.345805] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:37.837 [2024-07-15 10:27:02.557962] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:19:38.097 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:38.097 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.097 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:38.097 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:38.097 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:38.097 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:38.097 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.097 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.356 [2024-07-15 10:27:02.983864] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:38.356 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:38.356 "name": "raid_bdev1", 00:19:38.356 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:38.356 "strip_size_kb": 0, 00:19:38.356 "state": "online", 00:19:38.356 "raid_level": "raid1", 00:19:38.356 "superblock": true, 00:19:38.356 "num_base_bdevs": 2, 00:19:38.356 "num_base_bdevs_discovered": 2, 00:19:38.356 "num_base_bdevs_operational": 2, 00:19:38.356 "process": { 00:19:38.356 "type": "rebuild", 00:19:38.356 "target": "spare", 00:19:38.356 "progress": { 00:19:38.356 "blocks": 61440, 00:19:38.356 "percent": 96 00:19:38.356 } 
00:19:38.356 }, 00:19:38.356 "base_bdevs_list": [ 00:19:38.356 { 00:19:38.356 "name": "spare", 00:19:38.356 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:38.356 "is_configured": true, 00:19:38.356 "data_offset": 2048, 00:19:38.356 "data_size": 63488 00:19:38.356 }, 00:19:38.356 { 00:19:38.356 "name": "BaseBdev2", 00:19:38.356 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:38.356 "is_configured": true, 00:19:38.356 "data_offset": 2048, 00:19:38.356 "data_size": 63488 00:19:38.356 } 00:19:38.357 ] 00:19:38.357 }' 00:19:38.357 10:27:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:38.357 10:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.357 10:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:38.357 10:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:38.357 10:27:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:19:38.357 [2024-07-15 10:27:03.084122] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:38.357 [2024-07-15 10:27:03.085573] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.292 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:19:39.292 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.292 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:39.292 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:39.293 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:39.293 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:39.293 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.293 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.551 "name": "raid_bdev1", 00:19:39.551 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:39.551 "strip_size_kb": 0, 00:19:39.551 "state": "online", 00:19:39.551 "raid_level": "raid1", 00:19:39.551 "superblock": true, 00:19:39.551 "num_base_bdevs": 2, 00:19:39.551 "num_base_bdevs_discovered": 2, 00:19:39.551 "num_base_bdevs_operational": 2, 00:19:39.551 "base_bdevs_list": [ 00:19:39.551 { 00:19:39.551 "name": "spare", 00:19:39.551 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:39.551 "is_configured": true, 00:19:39.551 "data_offset": 2048, 00:19:39.551 "data_size": 63488 00:19:39.551 }, 00:19:39.551 { 00:19:39.551 "name": "BaseBdev2", 00:19:39.551 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:39.551 "is_configured": true, 00:19:39.551 "data_offset": 2048, 00:19:39.551 "data_size": 63488 00:19:39.551 } 00:19:39.551 ] 00:19:39.551 }' 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.551 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.809 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.809 "name": "raid_bdev1", 00:19:39.809 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:39.809 "strip_size_kb": 0, 00:19:39.809 "state": "online", 00:19:39.809 "raid_level": "raid1", 00:19:39.809 "superblock": true, 00:19:39.809 "num_base_bdevs": 2, 00:19:39.809 "num_base_bdevs_discovered": 2, 00:19:39.809 "num_base_bdevs_operational": 2, 00:19:39.809 "base_bdevs_list": [ 00:19:39.809 { 00:19:39.809 "name": "spare", 00:19:39.809 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:39.809 "is_configured": true, 00:19:39.809 "data_offset": 2048, 00:19:39.809 "data_size": 63488 00:19:39.809 }, 00:19:39.809 { 00:19:39.809 "name": "BaseBdev2", 00:19:39.809 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:39.809 "is_configured": true, 00:19:39.809 "data_offset": 2048, 00:19:39.809 "data_size": 63488 00:19:39.809 } 00:19:39.809 ] 00:19:39.809 }' 00:19:39.809 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.810 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.068 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.068 "name": "raid_bdev1", 00:19:40.068 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:40.068 "strip_size_kb": 0, 00:19:40.068 "state": "online", 00:19:40.068 "raid_level": "raid1", 00:19:40.068 "superblock": true, 00:19:40.068 "num_base_bdevs": 2, 00:19:40.068 "num_base_bdevs_discovered": 2, 00:19:40.068 "num_base_bdevs_operational": 2, 00:19:40.068 "base_bdevs_list": [ 00:19:40.068 { 00:19:40.068 "name": "spare", 00:19:40.068 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:40.068 "is_configured": true, 00:19:40.068 "data_offset": 2048, 00:19:40.068 "data_size": 63488 00:19:40.068 }, 00:19:40.068 { 00:19:40.068 "name": "BaseBdev2", 00:19:40.068 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:40.068 "is_configured": true, 00:19:40.068 "data_offset": 2048, 00:19:40.068 "data_size": 63488 00:19:40.068 } 00:19:40.068 ] 00:19:40.068 }' 00:19:40.068 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.068 10:27:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.636 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:40.636 [2024-07-15 10:27:05.394626] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.636 [2024-07-15 10:27:05.394650] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.895 00:19:40.895 Latency(us) 00:19:40.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.895 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:40.895 raid_bdev1 : 10.13 126.34 379.02 0.00 0.00 11024.64 237.57 111568.49 00:19:40.895 =================================================================================================================== 00:19:40.895 Total : 126.34 379.02 0.00 0.00 11024.64 237.57 111568.49 00:19:40.895 [2024-07-15 10:27:05.437407] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.895 [2024-07-15 10:27:05.437442] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.895 [2024-07-15 10:27:05.437491] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.896 [2024-07-15 10:27:05.437499] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x120a270 name raid_bdev1, state offline 00:19:40.896 0 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@719 -- # jq length 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.896 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:41.155 /dev/nbd0 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.155 1+0 records in 00:19:41.155 1+0 records out 00:19:41.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265358 s, 15.4 MB/s 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:41.155 10:27:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.155 10:27:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:41.414 /dev/nbd1 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.414 1+0 records in 00:19:41.414 1+0 records out 00:19:41.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269201 s, 15.2 MB/s 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.414 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.673 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:41.931 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:42.189 [2024-07-15 10:27:06.795122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:42.189 [2024-07-15 10:27:06.795154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.189 [2024-07-15 10:27:06.795167] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1210430 00:19:42.189 [2024-07-15 10:27:06.795192] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.189 [2024-07-15 10:27:06.796357] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.189 [2024-07-15 10:27:06.796380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:42.189 [2024-07-15 10:27:06.796436] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:42.189 [2024-07-15 10:27:06.796455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:42.189 [2024-07-15 10:27:06.796524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.189 spare 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.189 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
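
A condensed sketch of the NBD comparison traced just above, using the rpc.py path, socket, and bdev names from this run (the actual test goes through the nbd_start_disks/waitfornbd/nbd_stop_disks helpers and asserts on their return codes). The 1048576-byte offset skips the metadata region implied by the reported data_offset of 2048 blocks at a 512-byte blocklen, so only the user data of the rebuilt member is compared against BaseBdev2:

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Export both base bdevs as NBD block devices.
  $rpc -s $sock nbd_start_disk spare /dev/nbd0
  $rpc -s $sock nbd_start_disk BaseBdev2 /dev/nbd1

  # Simplified wait; the traced waitfornbd helper retries this check up to 20 times.
  for nbd in nbd0 nbd1; do
      for i in {1..20}; do grep -q -w "$nbd" /proc/partitions && break; sleep 0.1; done
  done

  # Data past the superblock region must be identical after the rebuild.
  cmp -i 1048576 /dev/nbd0 /dev/nbd1

  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock nbd_stop_disk /dev/nbd0
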
00:19:42.189 [2024-07-15 10:27:06.896815] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x120c2b0 00:19:42.189 [2024-07-15 10:27:06.896825] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:42.189 [2024-07-15 10:27:06.896945] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13c4cc0 00:19:42.189 [2024-07-15 10:27:06.897038] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x120c2b0 00:19:42.189 [2024-07-15 10:27:06.897044] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x120c2b0 00:19:42.189 [2024-07-15 10:27:06.897110] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.447 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.447 "name": "raid_bdev1", 00:19:42.447 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:42.447 "strip_size_kb": 0, 00:19:42.447 "state": "online", 00:19:42.447 "raid_level": "raid1", 00:19:42.447 "superblock": true, 00:19:42.447 "num_base_bdevs": 2, 00:19:42.447 "num_base_bdevs_discovered": 2, 00:19:42.447 "num_base_bdevs_operational": 2, 00:19:42.447 "base_bdevs_list": [ 00:19:42.447 { 00:19:42.447 "name": "spare", 00:19:42.447 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:42.447 "is_configured": true, 00:19:42.447 "data_offset": 2048, 00:19:42.447 "data_size": 63488 00:19:42.447 }, 00:19:42.447 { 00:19:42.447 "name": "BaseBdev2", 00:19:42.447 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:42.447 "is_configured": true, 00:19:42.447 "data_offset": 2048, 00:19:42.447 "data_size": 63488 00:19:42.447 } 00:19:42.447 ] 00:19:42.447 }' 00:19:42.447 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.447 10:27:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:42.705 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:42.705 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:42.705 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:42.705 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:42.705 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:42.705 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.705 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.962 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:42.962 "name": "raid_bdev1", 00:19:42.962 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:42.962 "strip_size_kb": 0, 00:19:42.962 "state": "online", 00:19:42.962 "raid_level": "raid1", 00:19:42.962 "superblock": true, 00:19:42.962 "num_base_bdevs": 2, 00:19:42.962 "num_base_bdevs_discovered": 2, 00:19:42.962 "num_base_bdevs_operational": 2, 00:19:42.962 "base_bdevs_list": [ 00:19:42.962 { 00:19:42.962 "name": "spare", 00:19:42.962 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:42.962 "is_configured": true, 00:19:42.962 "data_offset": 2048, 00:19:42.962 "data_size": 63488 00:19:42.962 }, 
00:19:42.962 { 00:19:42.962 "name": "BaseBdev2", 00:19:42.962 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:42.962 "is_configured": true, 00:19:42.962 "data_offset": 2048, 00:19:42.962 "data_size": 63488 00:19:42.962 } 00:19:42.962 ] 00:19:42.962 }' 00:19:42.962 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:42.962 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:42.962 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:42.962 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:42.962 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:42.962 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.219 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.219 10:27:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:43.476 [2024-07-15 10:27:08.046506] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.476 "name": "raid_bdev1", 00:19:43.476 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:43.476 "strip_size_kb": 0, 00:19:43.476 "state": "online", 00:19:43.476 "raid_level": "raid1", 00:19:43.476 "superblock": true, 00:19:43.476 "num_base_bdevs": 2, 00:19:43.476 "num_base_bdevs_discovered": 1, 00:19:43.476 "num_base_bdevs_operational": 1, 00:19:43.476 "base_bdevs_list": [ 00:19:43.476 { 00:19:43.476 "name": null, 00:19:43.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.476 "is_configured": false, 
00:19:43.476 "data_offset": 2048, 00:19:43.476 "data_size": 63488 00:19:43.476 }, 00:19:43.476 { 00:19:43.476 "name": "BaseBdev2", 00:19:43.476 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:43.476 "is_configured": true, 00:19:43.476 "data_offset": 2048, 00:19:43.476 "data_size": 63488 00:19:43.476 } 00:19:43.476 ] 00:19:43.476 }' 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.476 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:44.047 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:44.304 [2024-07-15 10:27:08.860675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.304 [2024-07-15 10:27:08.860786] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:44.304 [2024-07-15 10:27:08.860798] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:44.304 [2024-07-15 10:27:08.860818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.304 [2024-07-15 10:27:08.865513] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13ab000 00:19:44.304 [2024-07-15 10:27:08.867162] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:44.304 10:27:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:19:45.236 10:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.236 10:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:45.236 10:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:45.236 10:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:45.236 10:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:45.236 10:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.236 10:27:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.494 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:45.494 "name": "raid_bdev1", 00:19:45.494 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:45.494 "strip_size_kb": 0, 00:19:45.494 "state": "online", 00:19:45.494 "raid_level": "raid1", 00:19:45.494 "superblock": true, 00:19:45.494 "num_base_bdevs": 2, 00:19:45.494 "num_base_bdevs_discovered": 2, 00:19:45.494 "num_base_bdevs_operational": 2, 00:19:45.494 "process": { 00:19:45.494 "type": "rebuild", 00:19:45.494 "target": "spare", 00:19:45.494 "progress": { 00:19:45.494 "blocks": 22528, 00:19:45.494 "percent": 35 00:19:45.495 } 00:19:45.495 }, 00:19:45.495 "base_bdevs_list": [ 00:19:45.495 { 00:19:45.495 "name": "spare", 00:19:45.495 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:45.495 "is_configured": true, 00:19:45.495 "data_offset": 2048, 00:19:45.495 "data_size": 63488 00:19:45.495 }, 00:19:45.495 { 00:19:45.495 "name": "BaseBdev2", 
00:19:45.495 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:45.495 "is_configured": true, 00:19:45.495 "data_offset": 2048, 00:19:45.495 "data_size": 63488 00:19:45.495 } 00:19:45.495 ] 00:19:45.495 }' 00:19:45.495 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:45.495 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.495 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:45.495 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.495 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:45.754 [2024-07-15 10:27:10.305444] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.754 [2024-07-15 10:27:10.377569] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:45.754 [2024-07-15 10:27:10.377606] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.754 [2024-07-15 10:27:10.377632] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:45.754 [2024-07-15 10:27:10.377638] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.754 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.012 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.012 "name": "raid_bdev1", 00:19:46.012 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:46.012 "strip_size_kb": 0, 00:19:46.012 "state": "online", 00:19:46.012 "raid_level": "raid1", 00:19:46.012 "superblock": true, 00:19:46.012 "num_base_bdevs": 2, 00:19:46.012 "num_base_bdevs_discovered": 1, 00:19:46.012 "num_base_bdevs_operational": 1, 00:19:46.012 "base_bdevs_list": [ 00:19:46.012 { 00:19:46.012 "name": null, 00:19:46.012 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:46.012 "is_configured": false, 00:19:46.012 "data_offset": 2048, 00:19:46.012 "data_size": 63488 00:19:46.012 }, 00:19:46.012 { 00:19:46.012 "name": "BaseBdev2", 00:19:46.012 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:46.012 "is_configured": true, 00:19:46.012 "data_offset": 2048, 00:19:46.012 "data_size": 63488 00:19:46.012 } 00:19:46.012 ] 00:19:46.012 }' 00:19:46.012 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.012 10:27:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.269 10:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:46.575 [2024-07-15 10:27:11.195972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.575 [2024-07-15 10:27:11.196006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.575 [2024-07-15 10:27:11.196037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x120ef00 00:19:46.575 [2024-07-15 10:27:11.196046] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.575 [2024-07-15 10:27:11.196304] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.575 [2024-07-15 10:27:11.196316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.575 [2024-07-15 10:27:11.196372] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:46.575 [2024-07-15 10:27:11.196380] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:46.575 [2024-07-15 10:27:11.196387] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:46.575 [2024-07-15 10:27:11.196404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.575 [2024-07-15 10:27:11.201186] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13ad2b0 00:19:46.575 spare 00:19:46.575 [2024-07-15 10:27:11.202224] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:46.575 10:27:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:19:47.533 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.533 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:47.533 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:19:47.533 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:19:47.533 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:47.533 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.533 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.792 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.792 "name": "raid_bdev1", 00:19:47.792 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:47.792 "strip_size_kb": 0, 00:19:47.792 "state": "online", 00:19:47.792 "raid_level": "raid1", 00:19:47.792 "superblock": true, 00:19:47.792 "num_base_bdevs": 2, 00:19:47.792 "num_base_bdevs_discovered": 2, 00:19:47.792 "num_base_bdevs_operational": 2, 00:19:47.792 "process": { 00:19:47.792 "type": "rebuild", 00:19:47.792 "target": "spare", 00:19:47.792 "progress": { 00:19:47.792 "blocks": 22528, 00:19:47.792 "percent": 35 00:19:47.792 } 00:19:47.792 }, 00:19:47.792 "base_bdevs_list": [ 00:19:47.792 { 00:19:47.792 "name": "spare", 00:19:47.792 "uuid": "1bf6c60d-6664-5b56-947f-b6a706264f55", 00:19:47.792 "is_configured": true, 00:19:47.792 "data_offset": 2048, 00:19:47.792 "data_size": 63488 00:19:47.792 }, 00:19:47.792 { 00:19:47.792 "name": "BaseBdev2", 00:19:47.792 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:47.792 "is_configured": true, 00:19:47.792 "data_offset": 2048, 00:19:47.792 "data_size": 63488 00:19:47.792 } 00:19:47.792 ] 00:19:47.792 }' 00:19:47.792 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:47.792 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.792 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:47.792 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.792 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:48.050 [2024-07-15 10:27:12.644913] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.050 [2024-07-15 10:27:12.712601] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:48.050 [2024-07-15 
10:27:12.712631] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.050 [2024-07-15 10:27:12.712641] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.050 [2024-07-15 10:27:12.712646] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.050 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.313 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.313 "name": "raid_bdev1", 00:19:48.313 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:48.313 "strip_size_kb": 0, 00:19:48.313 "state": "online", 00:19:48.313 "raid_level": "raid1", 00:19:48.313 "superblock": true, 00:19:48.313 "num_base_bdevs": 2, 00:19:48.313 "num_base_bdevs_discovered": 1, 00:19:48.313 "num_base_bdevs_operational": 1, 00:19:48.313 "base_bdevs_list": [ 00:19:48.313 { 00:19:48.313 "name": null, 00:19:48.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.313 "is_configured": false, 00:19:48.313 "data_offset": 2048, 00:19:48.313 "data_size": 63488 00:19:48.313 }, 00:19:48.313 { 00:19:48.313 "name": "BaseBdev2", 00:19:48.313 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:48.313 "is_configured": true, 00:19:48.313 "data_offset": 2048, 00:19:48.313 "data_size": 63488 00:19:48.313 } 00:19:48.313 ] 00:19:48.313 }' 00:19:48.313 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.313 10:27:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
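
The @189/@190 checks repeated throughout this trace reduce to two jq lookups with a "none" default over the bdev_raid_get_bdevs output; a condensed sketch, with process_type and target named after the locals in the traced verify_raid_bdev_process helper:

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  process_type=none   # "rebuild" while a rebuild is in flight
  target=none         # "spare" while the spare is being rebuilt

  raid_bdev_info=$($rpc -s $sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")')

  # Missing process fields default to "none" once no rebuild is running.
  [[ $(jq -r '.process.type // "none"'   <<< "$raid_bdev_info") == "$process_type" ]]
  [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
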
00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:48.884 "name": "raid_bdev1", 00:19:48.884 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:48.884 "strip_size_kb": 0, 00:19:48.884 "state": "online", 00:19:48.884 "raid_level": "raid1", 00:19:48.884 "superblock": true, 00:19:48.884 "num_base_bdevs": 2, 00:19:48.884 "num_base_bdevs_discovered": 1, 00:19:48.884 "num_base_bdevs_operational": 1, 00:19:48.884 "base_bdevs_list": [ 00:19:48.884 { 00:19:48.884 "name": null, 00:19:48.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.884 "is_configured": false, 00:19:48.884 "data_offset": 2048, 00:19:48.884 "data_size": 63488 00:19:48.884 }, 00:19:48.884 { 00:19:48.884 "name": "BaseBdev2", 00:19:48.884 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:48.884 "is_configured": true, 00:19:48.884 "data_offset": 2048, 00:19:48.884 "data_size": 63488 00:19:48.884 } 00:19:48.884 ] 00:19:48.884 }' 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:48.884 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:49.143 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:49.143 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:49.143 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:49.402 [2024-07-15 10:27:13.980194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:49.402 [2024-07-15 10:27:13.980227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.402 [2024-07-15 10:27:13.980243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x120fe70 00:19:49.402 [2024-07-15 10:27:13.980273] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.402 [2024-07-15 10:27:13.980526] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.402 [2024-07-15 10:27:13.980538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:49.402 [2024-07-15 10:27:13.980583] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:49.402 [2024-07-15 10:27:13.980591] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:49.402 [2024-07-15 10:27:13.980598] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:49.402 BaseBdev1 00:19:49.402 10:27:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:19:50.339 10:27:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:50.339 10:27:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.339 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.598 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.598 "name": "raid_bdev1", 00:19:50.598 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:50.598 "strip_size_kb": 0, 00:19:50.598 "state": "online", 00:19:50.598 "raid_level": "raid1", 00:19:50.598 "superblock": true, 00:19:50.598 "num_base_bdevs": 2, 00:19:50.598 "num_base_bdevs_discovered": 1, 00:19:50.598 "num_base_bdevs_operational": 1, 00:19:50.598 "base_bdevs_list": [ 00:19:50.598 { 00:19:50.598 "name": null, 00:19:50.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.598 "is_configured": false, 00:19:50.598 "data_offset": 2048, 00:19:50.598 "data_size": 63488 00:19:50.598 }, 00:19:50.598 { 00:19:50.598 "name": "BaseBdev2", 00:19:50.598 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:50.598 "is_configured": true, 00:19:50.598 "data_offset": 2048, 00:19:50.598 "data_size": 63488 00:19:50.598 } 00:19:50.598 ] 00:19:50.598 }' 00:19:50.598 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.598 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:50.857 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.857 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:50.857 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:50.857 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:50.857 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:50.857 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.857 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.117 10:27:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:51.117 "name": "raid_bdev1", 00:19:51.117 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:51.117 "strip_size_kb": 0, 00:19:51.117 "state": "online", 00:19:51.117 "raid_level": "raid1", 00:19:51.117 "superblock": true, 00:19:51.117 "num_base_bdevs": 2, 00:19:51.117 "num_base_bdevs_discovered": 1, 00:19:51.117 "num_base_bdevs_operational": 1, 00:19:51.117 "base_bdevs_list": [ 00:19:51.117 { 00:19:51.117 "name": null, 00:19:51.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.117 "is_configured": false, 00:19:51.117 "data_offset": 2048, 00:19:51.117 "data_size": 63488 00:19:51.117 }, 00:19:51.117 { 00:19:51.117 "name": "BaseBdev2", 00:19:51.117 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:51.117 "is_configured": true, 00:19:51.117 "data_offset": 2048, 00:19:51.117 "data_size": 63488 00:19:51.117 } 00:19:51.117 ] 00:19:51.117 }' 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:51.117 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:51.377 10:27:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:51.377 [2024-07-15 10:27:16.061832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.377 [2024-07-15 
10:27:16.061941] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:51.377 [2024-07-15 10:27:16.061952] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:51.377 request: 00:19:51.377 { 00:19:51.377 "base_bdev": "BaseBdev1", 00:19:51.377 "raid_bdev": "raid_bdev1", 00:19:51.377 "method": "bdev_raid_add_base_bdev", 00:19:51.377 "req_id": 1 00:19:51.377 } 00:19:51.377 Got JSON-RPC error response 00:19:51.377 response: 00:19:51.377 { 00:19:51.377 "code": -22, 00:19:51.377 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:51.377 } 00:19:51.377 10:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:19:51.377 10:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.377 10:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.377 10:27:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.377 10:27:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.314 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.573 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:52.573 "name": "raid_bdev1", 00:19:52.573 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:52.573 "strip_size_kb": 0, 00:19:52.573 "state": "online", 00:19:52.573 "raid_level": "raid1", 00:19:52.573 "superblock": true, 00:19:52.573 "num_base_bdevs": 2, 00:19:52.573 "num_base_bdevs_discovered": 1, 00:19:52.573 "num_base_bdevs_operational": 1, 00:19:52.574 "base_bdevs_list": [ 00:19:52.574 { 00:19:52.574 "name": null, 00:19:52.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.574 "is_configured": false, 00:19:52.574 "data_offset": 2048, 00:19:52.574 "data_size": 63488 00:19:52.574 }, 00:19:52.574 { 00:19:52.574 "name": "BaseBdev2", 00:19:52.574 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:52.574 "is_configured": true, 
00:19:52.574 "data_offset": 2048, 00:19:52.574 "data_size": 63488 00:19:52.574 } 00:19:52.574 ] 00:19:52.574 }' 00:19:52.574 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:52.574 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:53.142 "name": "raid_bdev1", 00:19:53.142 "uuid": "1bcd598e-fe0d-4c7b-9383-a921a4f9d99d", 00:19:53.142 "strip_size_kb": 0, 00:19:53.142 "state": "online", 00:19:53.142 "raid_level": "raid1", 00:19:53.142 "superblock": true, 00:19:53.142 "num_base_bdevs": 2, 00:19:53.142 "num_base_bdevs_discovered": 1, 00:19:53.142 "num_base_bdevs_operational": 1, 00:19:53.142 "base_bdevs_list": [ 00:19:53.142 { 00:19:53.142 "name": null, 00:19:53.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.142 "is_configured": false, 00:19:53.142 "data_offset": 2048, 00:19:53.142 "data_size": 63488 00:19:53.142 }, 00:19:53.142 { 00:19:53.142 "name": "BaseBdev2", 00:19:53.142 "uuid": "13ac02a8-6f2d-5f72-b911-9ea62a4026b2", 00:19:53.142 "is_configured": true, 00:19:53.142 "data_offset": 2048, 00:19:53.142 "data_size": 63488 00:19:53.142 } 00:19:53.142 ] 00:19:53.142 }' 00:19:53.142 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:19:53.402 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:19:53.402 10:27:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 1859785 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 1859785 ']' 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 1859785 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1859785 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
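The shutdown sequence around this point is a guarded kill: the pid is checked for liveness, its command name is read back (reactor_0 in this run) so that a sudo wrapper is never signalled by mistake, and only then is the bdevperf process killed and reaped. The sketch below is a condensed paraphrase of those checks as they appear in the trace, not the actual autotest_common.sh helper; the function name kill_and_reap is hypothetical.

  kill_and_reap() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                      # process must still exist
      if [ "$(uname)" = Linux ]; then
          local comm
          comm=$(ps --no-headers -o comm= "$pid")     # reactor_0 in this run
          [ "$comm" != sudo ] || return 1             # refuse to kill a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                     # reap the bdevperf child
  }
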
00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1859785' 00:19:53.402 killing process with pid 1859785 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 1859785 00:19:53.402 Received shutdown signal, test time was about 22.711432 seconds 00:19:53.402 00:19:53.402 Latency(us) 00:19:53.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.402 =================================================================================================================== 00:19:53.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.402 [2024-07-15 10:27:18.045438] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:53.402 [2024-07-15 10:27:18.045504] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:53.402 [2024-07-15 10:27:18.045535] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:53.402 [2024-07-15 10:27:18.045543] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x120c2b0 name raid_bdev1, state offline 00:19:53.402 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 1859785 00:19:53.402 [2024-07-15 10:27:18.063731] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:53.662 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:19:53.662 00:19:53.662 real 0m26.286s 00:19:53.662 user 0m39.564s 00:19:53.662 sys 0m3.665s 00:19:53.662 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:53.662 10:27:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:53.662 ************************************ 00:19:53.662 END TEST raid_rebuild_test_sb_io 00:19:53.662 ************************************ 00:19:53.662 10:27:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:53.662 10:27:18 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:19:53.662 10:27:18 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:19:53.662 10:27:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:19:53.662 10:27:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.662 10:27:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:53.662 ************************************ 00:19:53.662 START TEST raid_rebuild_test 00:19:53.662 ************************************ 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo 
BaseBdev1 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=1865134 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 1865134 /var/tmp/spdk-raid.sock 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 1865134 ']' 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:53.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.663 10:27:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.663 [2024-07-15 10:27:18.389952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
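The base_bdevs array echoed above is assembled by the counting loop visible in the trace before bdevperf is launched on the raid socket. A standalone equivalent for this run (num_base_bdevs=4) is sketched below; only the loop form is condensed, the names are the ones used throughout the test.

  num_base_bdevs=4
  base_bdevs=()
  for ((i = 1; i <= num_base_bdevs; i++)); do
      base_bdevs+=("BaseBdev$i")
  done
  echo "${base_bdevs[@]}"    # -> BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
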
00:19:53.663 [2024-07-15 10:27:18.389999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865134 ] 00:19:53.663 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:53.663 Zero copy mechanism will not be used. 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:53.663 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:53.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:53.663 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:53.923 [2024-07-15 10:27:18.479233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.923 [2024-07-15 10:27:18.549721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.923 [2024-07-15 10:27:18.612248] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.923 [2024-07-15 10:27:18.612274] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:54.491 10:27:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.491 10:27:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:19:54.491 10:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:54.491 10:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:54.751 BaseBdev1_malloc 00:19:54.751 10:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:54.751 [2024-07-15 10:27:19.520618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:54.751 [2024-07-15 10:27:19.520655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.751 [2024-07-15 10:27:19.520673] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfbe5f0 00:19:54.751 [2024-07-15 10:27:19.520682] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.751 [2024-07-15 10:27:19.521794] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.751 [2024-07-15 10:27:19.521816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:54.751 BaseBdev1 00:19:54.751 10:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:54.751 10:27:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:55.037 BaseBdev2_malloc 00:19:55.037 10:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:55.295 [2024-07-15 10:27:19.877016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:55.295 [2024-07-15 10:27:19.877053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.295 [2024-07-15 10:27:19.877068] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1162130 00:19:55.296 [2024-07-15 10:27:19.877077] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.296 [2024-07-15 10:27:19.878153] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.296 [2024-07-15 10:27:19.878175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:55.296 BaseBdev2 00:19:55.296 10:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:55.296 10:27:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:55.296 BaseBdev3_malloc 00:19:55.296 10:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:55.553 [2024-07-15 10:27:20.221692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:55.553 [2024-07-15 10:27:20.221729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.553 [2024-07-15 10:27:20.221746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1158420 00:19:55.553 [2024-07-15 10:27:20.221755] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.553 [2024-07-15 10:27:20.222837] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.553 [2024-07-15 10:27:20.222859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:55.553 BaseBdev3 00:19:55.553 10:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:19:55.553 10:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:55.821 BaseBdev4_malloc 00:19:55.821 10:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:55.821 [2024-07-15 10:27:20.574542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:55.821 [2024-07-15 10:27:20.574576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.821 [2024-07-15 10:27:20.574609] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1158d40 00:19:55.821 [2024-07-15 10:27:20.574617] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.821 [2024-07-15 10:27:20.575635] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.821 [2024-07-15 10:27:20.575660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:55.821 BaseBdev4 00:19:55.821 10:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:56.082 spare_malloc 00:19:56.082 10:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:56.340 spare_delay 00:19:56.340 10:27:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:56.340 [2024-07-15 10:27:21.079427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:56.340 [2024-07-15 10:27:21.079460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.340 [2024-07-15 10:27:21.079475] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfb7db0 00:19:56.340 [2024-07-15 10:27:21.079499] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.340 [2024-07-15 10:27:21.080523] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.340 [2024-07-15 10:27:21.080546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:56.340 spare 00:19:56.340 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:56.599 [2024-07-15 10:27:21.235841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.599 [2024-07-15 10:27:21.236647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:56.599 [2024-07-15 10:27:21.236682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:56.599 [2024-07-15 10:27:21.236710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:56.599 [2024-07-15 10:27:21.236760] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfba5b0 00:19:56.599 [2024-07-15 10:27:21.236766] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:56.599 [2024-07-15 10:27:21.236899] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfbd380 00:19:56.599 [2024-07-15 10:27:21.237003] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfba5b0 00:19:56.599 [2024-07-15 10:27:21.237010] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfba5b0 00:19:56.599 [2024-07-15 10:27:21.237082] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
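The bdev stack built over the preceding steps stacks a passthru bdev on each malloc bdev, adds a delayed "spare" that will later serve as the rebuild target, and finally combines the four passthru bdevs into a RAID1 volume. The condensed view below uses only RPC verbs and arguments that appear verbatim in the trace; the $rpc shorthand is illustrative.

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"            # 65536 blocks of 512 B
      $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev$i"
  done
  $rpc bdev_malloc_create 32 512 -b spare_malloc                         # future rebuild target
  $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $rpc bdev_passthru_create -b spare_delay -p spare
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
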
00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.599 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.866 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.866 "name": "raid_bdev1", 00:19:56.866 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:19:56.866 "strip_size_kb": 0, 00:19:56.866 "state": "online", 00:19:56.866 "raid_level": "raid1", 00:19:56.866 "superblock": false, 00:19:56.866 "num_base_bdevs": 4, 00:19:56.866 "num_base_bdevs_discovered": 4, 00:19:56.866 "num_base_bdevs_operational": 4, 00:19:56.866 "base_bdevs_list": [ 00:19:56.866 { 00:19:56.866 "name": "BaseBdev1", 00:19:56.866 "uuid": "9c3cd389-460e-5090-930a-72ab65292c26", 00:19:56.866 "is_configured": true, 00:19:56.866 "data_offset": 0, 00:19:56.866 "data_size": 65536 00:19:56.866 }, 00:19:56.866 { 00:19:56.866 "name": "BaseBdev2", 00:19:56.866 "uuid": "2ebf880d-fe34-5f04-ba30-de4d968aafef", 00:19:56.866 "is_configured": true, 00:19:56.866 "data_offset": 0, 00:19:56.866 "data_size": 65536 00:19:56.866 }, 00:19:56.866 { 00:19:56.866 "name": "BaseBdev3", 00:19:56.866 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:19:56.866 "is_configured": true, 00:19:56.866 "data_offset": 0, 00:19:56.866 "data_size": 65536 00:19:56.866 }, 00:19:56.866 { 00:19:56.866 "name": "BaseBdev4", 00:19:56.866 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:19:56.866 "is_configured": true, 00:19:56.866 "data_offset": 0, 00:19:56.866 "data_size": 65536 00:19:56.866 } 00:19:56.866 ] 00:19:56.866 }' 00:19:56.866 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.866 10:27:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.125 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:57.125 10:27:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:19:57.383 [2024-07-15 10:27:22.062183] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:57.383 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:19:57.383 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.383 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq 
-r '.[].base_bdevs_list[0].data_offset' 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:57.642 [2024-07-15 10:27:22.394864] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfba080 00:19:57.642 /dev/nbd0 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:57.642 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.901 1+0 records in 00:19:57.901 1+0 records out 00:19:57.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237823 s, 17.2 MB/s 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:19:57.901 10:27:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:03.245 65536+0 records in 00:20:03.245 65536+0 records out 00:20:03.245 33554432 bytes (34 MB, 32 MiB) copied, 4.79291 s, 7.0 MB/s 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:03.245 [2024-07-15 10:27:27.437460] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:03.245 [2024-07-15 10:27:27.577824] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:03.245 10:27:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:03.245 "name": "raid_bdev1", 00:20:03.245 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:03.245 "strip_size_kb": 0, 00:20:03.245 "state": "online", 00:20:03.245 "raid_level": "raid1", 00:20:03.245 "superblock": false, 00:20:03.245 "num_base_bdevs": 4, 00:20:03.245 "num_base_bdevs_discovered": 3, 00:20:03.245 "num_base_bdevs_operational": 3, 00:20:03.245 "base_bdevs_list": [ 00:20:03.245 { 00:20:03.245 "name": null, 00:20:03.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.245 "is_configured": false, 00:20:03.245 "data_offset": 0, 00:20:03.245 "data_size": 65536 00:20:03.245 }, 00:20:03.245 { 00:20:03.245 "name": "BaseBdev2", 00:20:03.245 "uuid": "2ebf880d-fe34-5f04-ba30-de4d968aafef", 00:20:03.245 "is_configured": true, 00:20:03.245 "data_offset": 0, 00:20:03.245 "data_size": 65536 00:20:03.245 }, 00:20:03.245 { 00:20:03.245 "name": "BaseBdev3", 00:20:03.245 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:03.245 "is_configured": true, 00:20:03.245 "data_offset": 0, 00:20:03.245 "data_size": 65536 00:20:03.245 }, 00:20:03.245 { 00:20:03.245 "name": "BaseBdev4", 00:20:03.245 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:03.245 "is_configured": true, 00:20:03.245 "data_offset": 0, 00:20:03.245 "data_size": 65536 00:20:03.245 } 00:20:03.245 ] 00:20:03.245 }' 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:03.245 10:27:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.505 10:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:03.764 [2024-07-15 10:27:28.419984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:03.764 [2024-07-15 10:27:28.423538] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfbd4a0 00:20:03.764 [2024-07-15 10:27:28.425028] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:03.764 10:27:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:04.699 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:04.699 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:04.699 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:04.699 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:04.699 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:04.699 10:27:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.699 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.957 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:04.957 "name": "raid_bdev1", 00:20:04.957 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:04.957 "strip_size_kb": 0, 00:20:04.957 "state": "online", 00:20:04.957 "raid_level": "raid1", 00:20:04.957 "superblock": false, 00:20:04.957 "num_base_bdevs": 4, 00:20:04.957 "num_base_bdevs_discovered": 4, 00:20:04.957 "num_base_bdevs_operational": 4, 00:20:04.957 "process": { 00:20:04.957 "type": "rebuild", 00:20:04.957 "target": "spare", 00:20:04.957 "progress": { 00:20:04.957 "blocks": 22528, 00:20:04.957 "percent": 34 00:20:04.957 } 00:20:04.957 }, 00:20:04.957 "base_bdevs_list": [ 00:20:04.957 { 00:20:04.957 "name": "spare", 00:20:04.957 "uuid": "7d9b620e-a7e0-590a-86d9-2146f7aeb456", 00:20:04.957 "is_configured": true, 00:20:04.957 "data_offset": 0, 00:20:04.957 "data_size": 65536 00:20:04.957 }, 00:20:04.957 { 00:20:04.957 "name": "BaseBdev2", 00:20:04.957 "uuid": "2ebf880d-fe34-5f04-ba30-de4d968aafef", 00:20:04.957 "is_configured": true, 00:20:04.957 "data_offset": 0, 00:20:04.957 "data_size": 65536 00:20:04.957 }, 00:20:04.957 { 00:20:04.957 "name": "BaseBdev3", 00:20:04.957 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:04.957 "is_configured": true, 00:20:04.957 "data_offset": 0, 00:20:04.957 "data_size": 65536 00:20:04.957 }, 00:20:04.957 { 00:20:04.957 "name": "BaseBdev4", 00:20:04.957 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:04.957 "is_configured": true, 00:20:04.957 "data_offset": 0, 00:20:04.957 "data_size": 65536 00:20:04.957 } 00:20:04.957 ] 00:20:04.957 }' 00:20:04.957 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:04.957 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.957 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:04.957 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.957 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:05.216 [2024-07-15 10:27:29.861209] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.216 [2024-07-15 10:27:29.935418] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:05.216 [2024-07-15 10:27:29.935450] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.216 [2024-07-15 10:27:29.935460] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.216 [2024-07-15 10:27:29.935465] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:05.216 
10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.216 10:27:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.474 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:05.474 "name": "raid_bdev1", 00:20:05.474 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:05.474 "strip_size_kb": 0, 00:20:05.474 "state": "online", 00:20:05.474 "raid_level": "raid1", 00:20:05.474 "superblock": false, 00:20:05.474 "num_base_bdevs": 4, 00:20:05.474 "num_base_bdevs_discovered": 3, 00:20:05.474 "num_base_bdevs_operational": 3, 00:20:05.474 "base_bdevs_list": [ 00:20:05.474 { 00:20:05.474 "name": null, 00:20:05.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.474 "is_configured": false, 00:20:05.474 "data_offset": 0, 00:20:05.474 "data_size": 65536 00:20:05.474 }, 00:20:05.474 { 00:20:05.474 "name": "BaseBdev2", 00:20:05.474 "uuid": "2ebf880d-fe34-5f04-ba30-de4d968aafef", 00:20:05.474 "is_configured": true, 00:20:05.474 "data_offset": 0, 00:20:05.474 "data_size": 65536 00:20:05.474 }, 00:20:05.474 { 00:20:05.474 "name": "BaseBdev3", 00:20:05.474 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:05.474 "is_configured": true, 00:20:05.474 "data_offset": 0, 00:20:05.474 "data_size": 65536 00:20:05.474 }, 00:20:05.474 { 00:20:05.474 "name": "BaseBdev4", 00:20:05.474 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:05.474 "is_configured": true, 00:20:05.474 "data_offset": 0, 00:20:05.474 "data_size": 65536 00:20:05.474 } 00:20:05.474 ] 00:20:05.474 }' 00:20:05.474 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:05.474 10:27:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:06.041 "name": "raid_bdev1", 00:20:06.041 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:06.041 "strip_size_kb": 0, 00:20:06.041 "state": "online", 00:20:06.041 "raid_level": "raid1", 00:20:06.041 "superblock": false, 00:20:06.041 "num_base_bdevs": 4, 00:20:06.041 "num_base_bdevs_discovered": 3, 00:20:06.041 "num_base_bdevs_operational": 3, 00:20:06.041 "base_bdevs_list": [ 00:20:06.041 { 00:20:06.041 "name": null, 00:20:06.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.041 "is_configured": false, 00:20:06.041 "data_offset": 0, 00:20:06.041 "data_size": 65536 00:20:06.041 }, 00:20:06.041 { 00:20:06.041 "name": "BaseBdev2", 00:20:06.041 "uuid": "2ebf880d-fe34-5f04-ba30-de4d968aafef", 00:20:06.041 "is_configured": true, 00:20:06.041 "data_offset": 0, 00:20:06.041 "data_size": 65536 00:20:06.041 }, 00:20:06.041 { 00:20:06.041 "name": "BaseBdev3", 00:20:06.041 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:06.041 "is_configured": true, 00:20:06.041 "data_offset": 0, 00:20:06.041 "data_size": 65536 00:20:06.041 }, 00:20:06.041 { 00:20:06.041 "name": "BaseBdev4", 00:20:06.041 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:06.041 "is_configured": true, 00:20:06.041 "data_offset": 0, 00:20:06.041 "data_size": 65536 00:20:06.041 } 00:20:06.041 ] 00:20:06.041 }' 00:20:06.041 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:06.300 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:06.300 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:06.300 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:06.300 10:27:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:06.300 [2024-07-15 10:27:31.017909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:06.300 [2024-07-15 10:27:31.021510] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10558a0 00:20:06.300 [2024-07-15 10:27:31.022574] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:06.300 10:27:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:07.675 "name": "raid_bdev1", 00:20:07.675 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:07.675 
"strip_size_kb": 0, 00:20:07.675 "state": "online", 00:20:07.675 "raid_level": "raid1", 00:20:07.675 "superblock": false, 00:20:07.675 "num_base_bdevs": 4, 00:20:07.675 "num_base_bdevs_discovered": 4, 00:20:07.675 "num_base_bdevs_operational": 4, 00:20:07.675 "process": { 00:20:07.675 "type": "rebuild", 00:20:07.675 "target": "spare", 00:20:07.675 "progress": { 00:20:07.675 "blocks": 22528, 00:20:07.675 "percent": 34 00:20:07.675 } 00:20:07.675 }, 00:20:07.675 "base_bdevs_list": [ 00:20:07.675 { 00:20:07.675 "name": "spare", 00:20:07.675 "uuid": "7d9b620e-a7e0-590a-86d9-2146f7aeb456", 00:20:07.675 "is_configured": true, 00:20:07.675 "data_offset": 0, 00:20:07.675 "data_size": 65536 00:20:07.675 }, 00:20:07.675 { 00:20:07.675 "name": "BaseBdev2", 00:20:07.675 "uuid": "2ebf880d-fe34-5f04-ba30-de4d968aafef", 00:20:07.675 "is_configured": true, 00:20:07.675 "data_offset": 0, 00:20:07.675 "data_size": 65536 00:20:07.675 }, 00:20:07.675 { 00:20:07.675 "name": "BaseBdev3", 00:20:07.675 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:07.675 "is_configured": true, 00:20:07.675 "data_offset": 0, 00:20:07.675 "data_size": 65536 00:20:07.675 }, 00:20:07.675 { 00:20:07.675 "name": "BaseBdev4", 00:20:07.675 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:07.675 "is_configured": true, 00:20:07.675 "data_offset": 0, 00:20:07.675 "data_size": 65536 00:20:07.675 } 00:20:07.675 ] 00:20:07.675 }' 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:20:07.675 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:07.675 [2024-07-15 10:27:32.458716] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:07.934 [2024-07-15 10:27:32.532890] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x10558a0 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.934 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:08.192 "name": "raid_bdev1", 00:20:08.192 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:08.192 "strip_size_kb": 0, 00:20:08.192 "state": "online", 00:20:08.192 "raid_level": "raid1", 00:20:08.192 "superblock": false, 00:20:08.192 "num_base_bdevs": 4, 00:20:08.192 "num_base_bdevs_discovered": 3, 00:20:08.192 "num_base_bdevs_operational": 3, 00:20:08.192 "process": { 00:20:08.192 "type": "rebuild", 00:20:08.192 "target": "spare", 00:20:08.192 "progress": { 00:20:08.192 "blocks": 32768, 00:20:08.192 "percent": 50 00:20:08.192 } 00:20:08.192 }, 00:20:08.192 "base_bdevs_list": [ 00:20:08.192 { 00:20:08.192 "name": "spare", 00:20:08.192 "uuid": "7d9b620e-a7e0-590a-86d9-2146f7aeb456", 00:20:08.192 "is_configured": true, 00:20:08.192 "data_offset": 0, 00:20:08.192 "data_size": 65536 00:20:08.192 }, 00:20:08.192 { 00:20:08.192 "name": null, 00:20:08.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.192 "is_configured": false, 00:20:08.192 "data_offset": 0, 00:20:08.192 "data_size": 65536 00:20:08.192 }, 00:20:08.192 { 00:20:08.192 "name": "BaseBdev3", 00:20:08.192 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:08.192 "is_configured": true, 00:20:08.192 "data_offset": 0, 00:20:08.192 "data_size": 65536 00:20:08.192 }, 00:20:08.192 { 00:20:08.192 "name": "BaseBdev4", 00:20:08.192 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:08.192 "is_configured": true, 00:20:08.192 "data_offset": 0, 00:20:08.192 "data_size": 65536 00:20:08.192 } 00:20:08.192 ] 00:20:08.192 }' 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=673 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.192 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.451 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:08.451 "name": "raid_bdev1", 00:20:08.451 "uuid": 
"04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:08.451 "strip_size_kb": 0, 00:20:08.451 "state": "online", 00:20:08.451 "raid_level": "raid1", 00:20:08.451 "superblock": false, 00:20:08.451 "num_base_bdevs": 4, 00:20:08.451 "num_base_bdevs_discovered": 3, 00:20:08.451 "num_base_bdevs_operational": 3, 00:20:08.451 "process": { 00:20:08.451 "type": "rebuild", 00:20:08.451 "target": "spare", 00:20:08.451 "progress": { 00:20:08.451 "blocks": 38912, 00:20:08.451 "percent": 59 00:20:08.451 } 00:20:08.451 }, 00:20:08.451 "base_bdevs_list": [ 00:20:08.451 { 00:20:08.451 "name": "spare", 00:20:08.451 "uuid": "7d9b620e-a7e0-590a-86d9-2146f7aeb456", 00:20:08.451 "is_configured": true, 00:20:08.451 "data_offset": 0, 00:20:08.451 "data_size": 65536 00:20:08.451 }, 00:20:08.451 { 00:20:08.451 "name": null, 00:20:08.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.451 "is_configured": false, 00:20:08.451 "data_offset": 0, 00:20:08.451 "data_size": 65536 00:20:08.451 }, 00:20:08.451 { 00:20:08.451 "name": "BaseBdev3", 00:20:08.451 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:08.451 "is_configured": true, 00:20:08.451 "data_offset": 0, 00:20:08.451 "data_size": 65536 00:20:08.451 }, 00:20:08.451 { 00:20:08.451 "name": "BaseBdev4", 00:20:08.451 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:08.451 "is_configured": true, 00:20:08.451 "data_offset": 0, 00:20:08.451 "data_size": 65536 00:20:08.451 } 00:20:08.451 ] 00:20:08.451 }' 00:20:08.451 10:27:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:08.451 10:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.451 10:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:08.451 10:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.451 10:27:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:09.385 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:09.385 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.385 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:09.385 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:09.385 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:09.385 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:09.385 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.385 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.644 [2024-07-15 10:27:34.244553] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:09.644 [2024-07-15 10:27:34.244591] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:09.644 [2024-07-15 10:27:34.244618] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.644 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:09.644 "name": "raid_bdev1", 00:20:09.644 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:09.644 
"strip_size_kb": 0, 00:20:09.644 "state": "online", 00:20:09.644 "raid_level": "raid1", 00:20:09.644 "superblock": false, 00:20:09.644 "num_base_bdevs": 4, 00:20:09.644 "num_base_bdevs_discovered": 3, 00:20:09.644 "num_base_bdevs_operational": 3, 00:20:09.644 "process": { 00:20:09.644 "type": "rebuild", 00:20:09.644 "target": "spare", 00:20:09.644 "progress": { 00:20:09.644 "blocks": 63488, 00:20:09.644 "percent": 96 00:20:09.644 } 00:20:09.644 }, 00:20:09.644 "base_bdevs_list": [ 00:20:09.644 { 00:20:09.644 "name": "spare", 00:20:09.644 "uuid": "7d9b620e-a7e0-590a-86d9-2146f7aeb456", 00:20:09.644 "is_configured": true, 00:20:09.644 "data_offset": 0, 00:20:09.644 "data_size": 65536 00:20:09.644 }, 00:20:09.644 { 00:20:09.644 "name": null, 00:20:09.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.644 "is_configured": false, 00:20:09.644 "data_offset": 0, 00:20:09.644 "data_size": 65536 00:20:09.644 }, 00:20:09.644 { 00:20:09.644 "name": "BaseBdev3", 00:20:09.644 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:09.644 "is_configured": true, 00:20:09.644 "data_offset": 0, 00:20:09.644 "data_size": 65536 00:20:09.644 }, 00:20:09.644 { 00:20:09.644 "name": "BaseBdev4", 00:20:09.644 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:09.644 "is_configured": true, 00:20:09.644 "data_offset": 0, 00:20:09.644 "data_size": 65536 00:20:09.644 } 00:20:09.644 ] 00:20:09.644 }' 00:20:09.644 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:09.644 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.644 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:09.644 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.644 10:27:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:10.579 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:10.579 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.579 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:10.579 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:10.579 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:10.579 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:10.579 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.579 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:10.837 "name": "raid_bdev1", 00:20:10.837 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:10.837 "strip_size_kb": 0, 00:20:10.837 "state": "online", 00:20:10.837 "raid_level": "raid1", 00:20:10.837 "superblock": false, 00:20:10.837 "num_base_bdevs": 4, 00:20:10.837 "num_base_bdevs_discovered": 3, 00:20:10.837 "num_base_bdevs_operational": 3, 00:20:10.837 "base_bdevs_list": [ 00:20:10.837 { 00:20:10.837 "name": "spare", 00:20:10.837 "uuid": "7d9b620e-a7e0-590a-86d9-2146f7aeb456", 00:20:10.837 "is_configured": true, 00:20:10.837 
"data_offset": 0, 00:20:10.837 "data_size": 65536 00:20:10.837 }, 00:20:10.837 { 00:20:10.837 "name": null, 00:20:10.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.837 "is_configured": false, 00:20:10.837 "data_offset": 0, 00:20:10.837 "data_size": 65536 00:20:10.837 }, 00:20:10.837 { 00:20:10.837 "name": "BaseBdev3", 00:20:10.837 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:10.837 "is_configured": true, 00:20:10.837 "data_offset": 0, 00:20:10.837 "data_size": 65536 00:20:10.837 }, 00:20:10.837 { 00:20:10.837 "name": "BaseBdev4", 00:20:10.837 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:10.837 "is_configured": true, 00:20:10.837 "data_offset": 0, 00:20:10.837 "data_size": 65536 00:20:10.837 } 00:20:10.837 ] 00:20:10.837 }' 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.837 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:11.096 "name": "raid_bdev1", 00:20:11.096 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:11.096 "strip_size_kb": 0, 00:20:11.096 "state": "online", 00:20:11.096 "raid_level": "raid1", 00:20:11.096 "superblock": false, 00:20:11.096 "num_base_bdevs": 4, 00:20:11.096 "num_base_bdevs_discovered": 3, 00:20:11.096 "num_base_bdevs_operational": 3, 00:20:11.096 "base_bdevs_list": [ 00:20:11.096 { 00:20:11.096 "name": "spare", 00:20:11.096 "uuid": "7d9b620e-a7e0-590a-86d9-2146f7aeb456", 00:20:11.096 "is_configured": true, 00:20:11.096 "data_offset": 0, 00:20:11.096 "data_size": 65536 00:20:11.096 }, 00:20:11.096 { 00:20:11.096 "name": null, 00:20:11.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.096 "is_configured": false, 00:20:11.096 "data_offset": 0, 00:20:11.096 "data_size": 65536 00:20:11.096 }, 00:20:11.096 { 00:20:11.096 "name": "BaseBdev3", 00:20:11.096 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:11.096 "is_configured": true, 00:20:11.096 "data_offset": 0, 00:20:11.096 "data_size": 65536 00:20:11.096 }, 00:20:11.096 { 00:20:11.096 "name": "BaseBdev4", 00:20:11.096 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:11.096 "is_configured": true, 00:20:11.096 "data_offset": 0, 00:20:11.096 "data_size": 65536 00:20:11.096 } 00:20:11.096 ] 00:20:11.096 
}' 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.096 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.097 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.097 10:27:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.355 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.355 "name": "raid_bdev1", 00:20:11.355 "uuid": "04f83f03-36b8-4193-9e18-f2ed7ac5e60b", 00:20:11.355 "strip_size_kb": 0, 00:20:11.355 "state": "online", 00:20:11.355 "raid_level": "raid1", 00:20:11.355 "superblock": false, 00:20:11.355 "num_base_bdevs": 4, 00:20:11.355 "num_base_bdevs_discovered": 3, 00:20:11.355 "num_base_bdevs_operational": 3, 00:20:11.355 "base_bdevs_list": [ 00:20:11.355 { 00:20:11.355 "name": "spare", 00:20:11.355 "uuid": "7d9b620e-a7e0-590a-86d9-2146f7aeb456", 00:20:11.355 "is_configured": true, 00:20:11.355 "data_offset": 0, 00:20:11.355 "data_size": 65536 00:20:11.355 }, 00:20:11.355 { 00:20:11.355 "name": null, 00:20:11.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.355 "is_configured": false, 00:20:11.355 "data_offset": 0, 00:20:11.355 "data_size": 65536 00:20:11.355 }, 00:20:11.355 { 00:20:11.355 "name": "BaseBdev3", 00:20:11.355 "uuid": "45916872-4cd0-5100-963f-cb7edb7fe3f6", 00:20:11.355 "is_configured": true, 00:20:11.355 "data_offset": 0, 00:20:11.355 "data_size": 65536 00:20:11.355 }, 00:20:11.355 { 00:20:11.355 "name": "BaseBdev4", 00:20:11.355 "uuid": "0bcf2402-3d2a-5b4b-b84e-48fa471ccaf2", 00:20:11.355 "is_configured": true, 00:20:11.355 "data_offset": 0, 00:20:11.355 "data_size": 65536 00:20:11.355 } 00:20:11.355 ] 00:20:11.355 }' 00:20:11.355 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.355 10:27:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.922 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:11.922 [2024-07-15 10:27:36.642242] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.922 [2024-07-15 10:27:36.642263] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.922 [2024-07-15 10:27:36.642302] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.922 [2024-07-15 10:27:36.642347] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.922 [2024-07-15 10:27:36.642354] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfba5b0 name raid_bdev1, state offline 00:20:11.922 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.922 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:20:12.181 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:12.181 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:12.181 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:12.181 10:27:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:12.181 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:12.181 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:12.181 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:12.182 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:12.182 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:12.182 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:12.182 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:12.182 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:12.182 10:27:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:12.440 /dev/nbd0 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:12.440 1+0 records in 00:20:12.440 1+0 records out 00:20:12.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269776 s, 15.2 MB/s 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:12.440 /dev/nbd1 00:20:12.440 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:12.698 1+0 records in 00:20:12.698 1+0 records out 00:20:12.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317752 s, 12.9 MB/s 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.698 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 1865134 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 1865134 ']' 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 1865134 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.956 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 1865134 00:20:13.215 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.215 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.215 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1865134' 00:20:13.215 killing process with pid 1865134 00:20:13.215 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 1865134 00:20:13.215 Received shutdown signal, test time was about 60.000000 seconds 00:20:13.215 00:20:13.215 Latency(us) 00:20:13.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.215 =================================================================================================================== 00:20:13.215 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.215 [2024-07-15 10:27:37.761072] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:13.215 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 1865134 00:20:13.215 [2024-07-15 10:27:37.798198] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:13.215 10:27:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:20:13.215 00:20:13.215 real 0m19.641s 00:20:13.215 user 0m26.216s 00:20:13.215 sys 0m3.873s 00:20:13.215 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.215 10:27:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.215 ************************************ 00:20:13.215 END TEST raid_rebuild_test 00:20:13.215 ************************************ 00:20:13.473 10:27:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:13.473 10:27:38 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:20:13.473 10:27:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:13.473 10:27:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.473 10:27:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:13.473 ************************************ 00:20:13.473 START TEST raid_rebuild_test_sb 00:20:13.473 ************************************ 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:13.473 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 
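The raid_rebuild_test pass above drives every state check through the same two probes against the bdevperf RPC socket: rpc.py bdev_raid_get_bdevs dumps the raid bdev record, and jq pulls the rebuild process fields out of that dump (bdev_raid.sh@189 and @190 in the trace). A minimal stand-alone sketch of that polling pattern, built only from the commands visible in the log; the RPC/SOCK variables and the until-loop are illustrative glue, not part of bdev_raid.sh:

RPC=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# Re-dump the raid bdev each pass; .process.* falls back to "none" once the
# rebuild completes and the process object disappears from the record.
until [ "$($RPC -s $SOCK bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')" = none ]; do
  $RPC -s $SOCK bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none", .process.progress.percent'
  sleep 1
done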
00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=1868777 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 1868777 /var/tmp/spdk-raid.sock 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 1868777 ']' 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:13.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.474 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.474 [2024-07-15 10:27:38.124063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
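Once this bdevperf instance is listening on /var/tmp/spdk-raid.sock, the raid_rebuild_test_sb setup that follows builds its array entirely over RPC: each base bdev is a malloc bdev created with 'bdev_malloc_create 32 512' (32 MiB, 512-byte blocks) and claimed by a passthru bdev, the spare additionally sits behind a delay bdev (presumably to slow spare writes enough for the rebuild to stay observable), and the raid1 bdev is created with -s so a superblock is written. Condensed into a sketch; the for-loop is illustrative shorthand for the per-bdev calls that appear one by one in the trace below:

RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Four malloc base bdevs, each wrapped in (claimed by) a passthru bdev
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
  $RPC bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
done
# The spare is a malloc bdev behind a delay bdev, then wrapped in a passthru bdev
$RPC bdev_malloc_create 32 512 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare
# raid1 across the four passthru bdevs; -s adds the superblock, consistent with the
# later dumps reporting data_offset 2048 / data_size 63488 rather than the 0 / 65536
# seen in the non-superblock test above
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1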
00:20:13.474 [2024-07-15 10:27:38.124115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868777 ] 00:20:13.474 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:13.474 Zero copy mechanism will not be used. 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:13.474 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:13.474 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.474 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:13.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.475 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:13.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:13.475 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:13.475 [2024-07-15 10:27:38.216563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.733 [2024-07-15 10:27:38.288367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.733 [2024-07-15 10:27:38.341052] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:13.733 [2024-07-15 10:27:38.341081] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.300 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.300 10:27:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:14.300 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:14.300 10:27:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:14.300 BaseBdev1_malloc 00:20:14.300 10:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:14.558 [2024-07-15 10:27:39.232887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:14.558 [2024-07-15 10:27:39.232932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.558 [2024-07-15 10:27:39.232950] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10fa5f0 00:20:14.558 [2024-07-15 10:27:39.232959] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.558 [2024-07-15 10:27:39.234140] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.558 [2024-07-15 10:27:39.234165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:14.558 BaseBdev1 00:20:14.558 10:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:14.558 10:27:39 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:14.816 BaseBdev2_malloc 00:20:14.816 10:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:14.816 [2024-07-15 10:27:39.557298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:14.816 [2024-07-15 10:27:39.557332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.816 [2024-07-15 10:27:39.557346] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x129e130 00:20:14.816 [2024-07-15 10:27:39.557371] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.816 [2024-07-15 10:27:39.558385] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.816 [2024-07-15 10:27:39.558408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:14.816 BaseBdev2 00:20:14.816 10:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:14.817 10:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:15.075 BaseBdev3_malloc 00:20:15.075 10:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:15.333 [2024-07-15 10:27:39.897545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:15.333 [2024-07-15 10:27:39.897574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.333 [2024-07-15 10:27:39.897588] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1294420 00:20:15.333 [2024-07-15 10:27:39.897612] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.333 [2024-07-15 10:27:39.898583] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.333 [2024-07-15 10:27:39.898607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:15.333 BaseBdev3 00:20:15.333 10:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:15.333 10:27:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:15.333 BaseBdev4_malloc 00:20:15.333 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:15.591 [2024-07-15 10:27:40.241909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:15.591 [2024-07-15 10:27:40.241960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.591 [2024-07-15 10:27:40.241976] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1294d40 00:20:15.591 [2024-07-15 10:27:40.241985] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.591 [2024-07-15 10:27:40.243005] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.591 [2024-07-15 10:27:40.243028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:15.591 BaseBdev4 00:20:15.591 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:15.849 spare_malloc 00:20:15.849 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:15.849 spare_delay 00:20:15.849 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:16.107 [2024-07-15 10:27:40.767063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:16.107 [2024-07-15 10:27:40.767098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.107 [2024-07-15 10:27:40.767115] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f3db0 00:20:16.107 [2024-07-15 10:27:40.767124] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.107 [2024-07-15 10:27:40.768166] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.107 [2024-07-15 10:27:40.768187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:16.107 spare 00:20:16.108 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:16.410 [2024-07-15 10:27:40.931507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.410 [2024-07-15 10:27:40.932296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.410 [2024-07-15 10:27:40.932330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.410 [2024-07-15 10:27:40.932357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:16.410 [2024-07-15 10:27:40.932480] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10f65b0 00:20:16.410 [2024-07-15 10:27:40.932487] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:16.410 [2024-07-15 10:27:40.932604] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f6580 00:20:16.410 [2024-07-15 10:27:40.932696] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10f65b0 00:20:16.410 [2024-07-15 10:27:40.932702] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10f65b0 00:20:16.410 [2024-07-15 10:27:40.932759] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.410 10:27:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.410 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.410 "name": "raid_bdev1", 00:20:16.410 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:16.410 "strip_size_kb": 0, 00:20:16.410 "state": "online", 00:20:16.410 "raid_level": "raid1", 00:20:16.410 "superblock": true, 00:20:16.410 "num_base_bdevs": 4, 00:20:16.410 "num_base_bdevs_discovered": 4, 00:20:16.410 "num_base_bdevs_operational": 4, 00:20:16.410 "base_bdevs_list": [ 00:20:16.410 { 00:20:16.410 "name": "BaseBdev1", 00:20:16.410 "uuid": "197de8a0-c217-5f22-beb5-7420b468b994", 00:20:16.410 "is_configured": true, 00:20:16.410 "data_offset": 2048, 00:20:16.410 "data_size": 63488 00:20:16.410 }, 00:20:16.410 { 00:20:16.410 "name": "BaseBdev2", 00:20:16.410 "uuid": "321fafe5-d83e-5c55-a806-2b3d73c3c9a2", 00:20:16.411 "is_configured": true, 00:20:16.411 "data_offset": 2048, 00:20:16.411 "data_size": 63488 00:20:16.411 }, 00:20:16.411 { 00:20:16.411 "name": "BaseBdev3", 00:20:16.411 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:16.411 "is_configured": true, 00:20:16.411 "data_offset": 2048, 00:20:16.411 "data_size": 63488 00:20:16.411 }, 00:20:16.411 { 00:20:16.411 "name": "BaseBdev4", 00:20:16.411 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:16.411 "is_configured": true, 00:20:16.411 "data_offset": 2048, 00:20:16.411 "data_size": 63488 00:20:16.411 } 00:20:16.411 ] 00:20:16.411 }' 00:20:16.411 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.411 10:27:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.978 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:16.978 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:16.978 [2024-07-15 10:27:41.757806] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:17.236 10:27:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:17.494 [2024-07-15 10:27:42.110580] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1293c60 00:20:17.494 /dev/nbd0 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:17.494 1+0 records in 00:20:17.494 1+0 records out 00:20:17.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282771 s, 14.5 MB/s 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:17.494 10:27:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:20:17.494 10:27:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:22.757 63488+0 records in 00:20:22.757 63488+0 records out 00:20:22.757 32505856 bytes (33 MB, 31 MiB) copied, 4.91778 s, 6.6 MB/s 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:22.757 [2024-07-15 10:27:47.291479] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:22.757 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:22.758 [2024-07-15 10:27:47.451563] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid1 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.758 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.030 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:23.030 "name": "raid_bdev1", 00:20:23.030 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:23.030 "strip_size_kb": 0, 00:20:23.030 "state": "online", 00:20:23.030 "raid_level": "raid1", 00:20:23.030 "superblock": true, 00:20:23.030 "num_base_bdevs": 4, 00:20:23.030 "num_base_bdevs_discovered": 3, 00:20:23.030 "num_base_bdevs_operational": 3, 00:20:23.030 "base_bdevs_list": [ 00:20:23.030 { 00:20:23.030 "name": null, 00:20:23.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.030 "is_configured": false, 00:20:23.030 "data_offset": 2048, 00:20:23.030 "data_size": 63488 00:20:23.030 }, 00:20:23.030 { 00:20:23.030 "name": "BaseBdev2", 00:20:23.030 "uuid": "321fafe5-d83e-5c55-a806-2b3d73c3c9a2", 00:20:23.030 "is_configured": true, 00:20:23.030 "data_offset": 2048, 00:20:23.030 "data_size": 63488 00:20:23.030 }, 00:20:23.030 { 00:20:23.030 "name": "BaseBdev3", 00:20:23.030 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:23.030 "is_configured": true, 00:20:23.030 "data_offset": 2048, 00:20:23.030 "data_size": 63488 00:20:23.030 }, 00:20:23.030 { 00:20:23.030 "name": "BaseBdev4", 00:20:23.030 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:23.030 "is_configured": true, 00:20:23.030 "data_offset": 2048, 00:20:23.030 "data_size": 63488 00:20:23.030 } 00:20:23.030 ] 00:20:23.030 }' 00:20:23.030 10:27:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:23.030 10:27:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.594 10:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:23.594 [2024-07-15 10:27:48.301754] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:23.594 [2024-07-15 10:27:48.305397] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f6580 00:20:23.595 [2024-07-15 10:27:48.307005] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.595 10:27:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:24.967 
10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:24.967 "name": "raid_bdev1", 00:20:24.967 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:24.967 "strip_size_kb": 0, 00:20:24.967 "state": "online", 00:20:24.967 "raid_level": "raid1", 00:20:24.967 "superblock": true, 00:20:24.967 "num_base_bdevs": 4, 00:20:24.967 "num_base_bdevs_discovered": 4, 00:20:24.967 "num_base_bdevs_operational": 4, 00:20:24.967 "process": { 00:20:24.967 "type": "rebuild", 00:20:24.967 "target": "spare", 00:20:24.967 "progress": { 00:20:24.967 "blocks": 22528, 00:20:24.967 "percent": 35 00:20:24.967 } 00:20:24.967 }, 00:20:24.967 "base_bdevs_list": [ 00:20:24.967 { 00:20:24.967 "name": "spare", 00:20:24.967 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:24.967 "is_configured": true, 00:20:24.967 "data_offset": 2048, 00:20:24.967 "data_size": 63488 00:20:24.967 }, 00:20:24.967 { 00:20:24.967 "name": "BaseBdev2", 00:20:24.967 "uuid": "321fafe5-d83e-5c55-a806-2b3d73c3c9a2", 00:20:24.967 "is_configured": true, 00:20:24.967 "data_offset": 2048, 00:20:24.967 "data_size": 63488 00:20:24.967 }, 00:20:24.967 { 00:20:24.967 "name": "BaseBdev3", 00:20:24.967 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:24.967 "is_configured": true, 00:20:24.967 "data_offset": 2048, 00:20:24.967 "data_size": 63488 00:20:24.967 }, 00:20:24.967 { 00:20:24.967 "name": "BaseBdev4", 00:20:24.967 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:24.967 "is_configured": true, 00:20:24.967 "data_offset": 2048, 00:20:24.967 "data_size": 63488 00:20:24.967 } 00:20:24.967 ] 00:20:24.967 }' 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.967 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:24.967 [2024-07-15 10:27:49.739356] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.225 [2024-07-15 10:27:49.817413] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.225 [2024-07-15 10:27:49.817447] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.225 [2024-07-15 10:27:49.817458] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.225 [2024-07-15 10:27:49.817480] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 
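The state checks that follow all use the same pattern: query the raid bdev over the test RPC socket and filter the JSON with jq. A minimal standalone sketch of that query, reusing only the socket path, bdev name, and fields printed in this trace (illustrative only; it can run only while the SPDK app from this test is still serving RPCs):

  # Illustrative re-run of the query used by verify_raid_bdev_state below;
  # socket path and bdev name are the ones printed in this log.
  /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")
             | {state, num_base_bdevs_discovered, num_base_bdevs_operational}'
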
00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.225 10:27:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.484 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.484 "name": "raid_bdev1", 00:20:25.484 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:25.484 "strip_size_kb": 0, 00:20:25.484 "state": "online", 00:20:25.484 "raid_level": "raid1", 00:20:25.484 "superblock": true, 00:20:25.484 "num_base_bdevs": 4, 00:20:25.484 "num_base_bdevs_discovered": 3, 00:20:25.484 "num_base_bdevs_operational": 3, 00:20:25.484 "base_bdevs_list": [ 00:20:25.484 { 00:20:25.484 "name": null, 00:20:25.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.484 "is_configured": false, 00:20:25.484 "data_offset": 2048, 00:20:25.484 "data_size": 63488 00:20:25.484 }, 00:20:25.484 { 00:20:25.484 "name": "BaseBdev2", 00:20:25.484 "uuid": "321fafe5-d83e-5c55-a806-2b3d73c3c9a2", 00:20:25.484 "is_configured": true, 00:20:25.484 "data_offset": 2048, 00:20:25.484 "data_size": 63488 00:20:25.484 }, 00:20:25.484 { 00:20:25.484 "name": "BaseBdev3", 00:20:25.484 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:25.484 "is_configured": true, 00:20:25.484 "data_offset": 2048, 00:20:25.484 "data_size": 63488 00:20:25.484 }, 00:20:25.484 { 00:20:25.484 "name": "BaseBdev4", 00:20:25.484 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:25.484 "is_configured": true, 00:20:25.484 "data_offset": 2048, 00:20:25.484 "data_size": 63488 00:20:25.484 } 00:20:25.484 ] 00:20:25.484 }' 00:20:25.484 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.484 10:27:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.742 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.742 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:25.742 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:25.742 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:25.742 10:27:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:25.742 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.742 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.000 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:26.000 "name": "raid_bdev1", 00:20:26.000 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:26.000 "strip_size_kb": 0, 00:20:26.000 "state": "online", 00:20:26.000 "raid_level": "raid1", 00:20:26.000 "superblock": true, 00:20:26.000 "num_base_bdevs": 4, 00:20:26.000 "num_base_bdevs_discovered": 3, 00:20:26.000 "num_base_bdevs_operational": 3, 00:20:26.000 "base_bdevs_list": [ 00:20:26.000 { 00:20:26.000 "name": null, 00:20:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.000 "is_configured": false, 00:20:26.000 "data_offset": 2048, 00:20:26.000 "data_size": 63488 00:20:26.000 }, 00:20:26.000 { 00:20:26.000 "name": "BaseBdev2", 00:20:26.000 "uuid": "321fafe5-d83e-5c55-a806-2b3d73c3c9a2", 00:20:26.000 "is_configured": true, 00:20:26.000 "data_offset": 2048, 00:20:26.000 "data_size": 63488 00:20:26.000 }, 00:20:26.000 { 00:20:26.000 "name": "BaseBdev3", 00:20:26.000 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:26.000 "is_configured": true, 00:20:26.000 "data_offset": 2048, 00:20:26.000 "data_size": 63488 00:20:26.000 }, 00:20:26.000 { 00:20:26.000 "name": "BaseBdev4", 00:20:26.000 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:26.000 "is_configured": true, 00:20:26.000 "data_offset": 2048, 00:20:26.000 "data_size": 63488 00:20:26.000 } 00:20:26.000 ] 00:20:26.000 }' 00:20:26.000 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:26.000 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:26.000 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:26.000 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:26.000 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:26.258 [2024-07-15 10:27:50.891778] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.258 [2024-07-15 10:27:50.895309] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1293c40 00:20:26.258 [2024-07-15 10:27:50.896393] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:26.258 10:27:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:27.191 10:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.192 10:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:27.192 10:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:27.192 10:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:27.192 10:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:27.192 10:27:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.192 10:27:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.449 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:27.449 "name": "raid_bdev1", 00:20:27.449 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:27.449 "strip_size_kb": 0, 00:20:27.449 "state": "online", 00:20:27.449 "raid_level": "raid1", 00:20:27.449 "superblock": true, 00:20:27.449 "num_base_bdevs": 4, 00:20:27.449 "num_base_bdevs_discovered": 4, 00:20:27.449 "num_base_bdevs_operational": 4, 00:20:27.449 "process": { 00:20:27.449 "type": "rebuild", 00:20:27.449 "target": "spare", 00:20:27.449 "progress": { 00:20:27.449 "blocks": 22528, 00:20:27.449 "percent": 35 00:20:27.449 } 00:20:27.449 }, 00:20:27.449 "base_bdevs_list": [ 00:20:27.449 { 00:20:27.449 "name": "spare", 00:20:27.449 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:27.449 "is_configured": true, 00:20:27.449 "data_offset": 2048, 00:20:27.449 "data_size": 63488 00:20:27.449 }, 00:20:27.449 { 00:20:27.449 "name": "BaseBdev2", 00:20:27.449 "uuid": "321fafe5-d83e-5c55-a806-2b3d73c3c9a2", 00:20:27.449 "is_configured": true, 00:20:27.449 "data_offset": 2048, 00:20:27.449 "data_size": 63488 00:20:27.449 }, 00:20:27.449 { 00:20:27.449 "name": "BaseBdev3", 00:20:27.449 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:27.449 "is_configured": true, 00:20:27.449 "data_offset": 2048, 00:20:27.449 "data_size": 63488 00:20:27.449 }, 00:20:27.449 { 00:20:27.449 "name": "BaseBdev4", 00:20:27.449 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:27.449 "is_configured": true, 00:20:27.449 "data_offset": 2048, 00:20:27.449 "data_size": 63488 00:20:27.450 } 00:20:27.450 ] 00:20:27.450 }' 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:20:27.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:20:27.450 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:27.707 [2024-07-15 10:27:52.308569] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:27.965 [2024-07-15 10:27:52.507128] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1293c40 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 
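The "line 665: [: =: unary operator expected" message a few entries above is the shell complaining that the left-hand operand of the traced '[' = false ']' test was empty, not a failure of the raid logic itself. A hedged sketch of that class of error and the usual quoting fix (the variable name here is hypothetical, not the one used in bdev_raid.sh):

  flag=""
  [ $flag = false ]      # expands to '[ = false ]' -> "[: =: unary operator expected"
  [ "$flag" = false ]    # quoted empty operand -> test simply evaluates to false, no error
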
00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:27.965 "name": "raid_bdev1", 00:20:27.965 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:27.965 "strip_size_kb": 0, 00:20:27.965 "state": "online", 00:20:27.965 "raid_level": "raid1", 00:20:27.965 "superblock": true, 00:20:27.965 "num_base_bdevs": 4, 00:20:27.965 "num_base_bdevs_discovered": 3, 00:20:27.965 "num_base_bdevs_operational": 3, 00:20:27.965 "process": { 00:20:27.965 "type": "rebuild", 00:20:27.965 "target": "spare", 00:20:27.965 "progress": { 00:20:27.965 "blocks": 32768, 00:20:27.965 "percent": 51 00:20:27.965 } 00:20:27.965 }, 00:20:27.965 "base_bdevs_list": [ 00:20:27.965 { 00:20:27.965 "name": "spare", 00:20:27.965 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:27.965 "is_configured": true, 00:20:27.965 "data_offset": 2048, 00:20:27.965 "data_size": 63488 00:20:27.965 }, 00:20:27.965 { 00:20:27.965 "name": null, 00:20:27.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.965 "is_configured": false, 00:20:27.965 "data_offset": 2048, 00:20:27.965 "data_size": 63488 00:20:27.965 }, 00:20:27.965 { 00:20:27.965 "name": "BaseBdev3", 00:20:27.965 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:27.965 "is_configured": true, 00:20:27.965 "data_offset": 2048, 00:20:27.965 "data_size": 63488 00:20:27.965 }, 00:20:27.965 { 00:20:27.965 "name": "BaseBdev4", 00:20:27.965 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:27.965 "is_configured": true, 00:20:27.965 "data_offset": 2048, 00:20:27.965 "data_size": 63488 00:20:27.965 } 00:20:27.965 ] 00:20:27.965 }' 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:27.965 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.966 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=693 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:28.224 "name": "raid_bdev1", 00:20:28.224 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:28.224 "strip_size_kb": 0, 00:20:28.224 "state": "online", 00:20:28.224 "raid_level": "raid1", 00:20:28.224 "superblock": true, 00:20:28.224 "num_base_bdevs": 4, 00:20:28.224 "num_base_bdevs_discovered": 3, 00:20:28.224 "num_base_bdevs_operational": 3, 00:20:28.224 "process": { 00:20:28.224 "type": "rebuild", 00:20:28.224 "target": "spare", 00:20:28.224 "progress": { 00:20:28.224 "blocks": 38912, 00:20:28.224 "percent": 61 00:20:28.224 } 00:20:28.224 }, 00:20:28.224 "base_bdevs_list": [ 00:20:28.224 { 00:20:28.224 "name": "spare", 00:20:28.224 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:28.224 "is_configured": true, 00:20:28.224 "data_offset": 2048, 00:20:28.224 "data_size": 63488 00:20:28.224 }, 00:20:28.224 { 00:20:28.224 "name": null, 00:20:28.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.224 "is_configured": false, 00:20:28.224 "data_offset": 2048, 00:20:28.224 "data_size": 63488 00:20:28.224 }, 00:20:28.224 { 00:20:28.224 "name": "BaseBdev3", 00:20:28.224 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:28.224 "is_configured": true, 00:20:28.224 "data_offset": 2048, 00:20:28.224 "data_size": 63488 00:20:28.224 }, 00:20:28.224 { 00:20:28.224 "name": "BaseBdev4", 00:20:28.224 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:28.224 "is_configured": true, 00:20:28.224 "data_offset": 2048, 00:20:28.224 "data_size": 63488 00:20:28.224 } 00:20:28.224 ] 00:20:28.224 }' 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.224 10:27:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:28.224 10:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.482 10:27:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.414 [2024-07-15 10:27:54.118314] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:29.414 [2024-07-15 10:27:54.118358] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:29.414 [2024-07-15 10:27:54.118450] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:29.414 "name": "raid_bdev1", 00:20:29.414 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:29.414 "strip_size_kb": 0, 00:20:29.414 "state": "online", 00:20:29.414 "raid_level": "raid1", 00:20:29.414 "superblock": true, 00:20:29.414 "num_base_bdevs": 4, 00:20:29.414 "num_base_bdevs_discovered": 3, 00:20:29.414 "num_base_bdevs_operational": 3, 00:20:29.414 "base_bdevs_list": [ 00:20:29.414 { 00:20:29.414 "name": "spare", 00:20:29.414 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:29.414 "is_configured": true, 00:20:29.414 "data_offset": 2048, 00:20:29.414 "data_size": 63488 00:20:29.414 }, 00:20:29.414 { 00:20:29.414 "name": null, 00:20:29.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.414 "is_configured": false, 00:20:29.414 "data_offset": 2048, 00:20:29.414 "data_size": 63488 00:20:29.414 }, 00:20:29.414 { 00:20:29.414 "name": "BaseBdev3", 00:20:29.414 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:29.414 "is_configured": true, 00:20:29.414 "data_offset": 2048, 00:20:29.414 "data_size": 63488 00:20:29.414 }, 00:20:29.414 { 00:20:29.414 "name": "BaseBdev4", 00:20:29.414 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:29.414 "is_configured": true, 00:20:29.414 "data_offset": 2048, 00:20:29.414 "data_size": 63488 00:20:29.414 } 00:20:29.414 ] 00:20:29.414 }' 00:20:29.414 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:29.671 "name": "raid_bdev1", 00:20:29.671 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:29.671 "strip_size_kb": 0, 00:20:29.671 "state": "online", 00:20:29.671 "raid_level": "raid1", 00:20:29.671 "superblock": true, 00:20:29.671 "num_base_bdevs": 4, 00:20:29.671 "num_base_bdevs_discovered": 3, 00:20:29.671 "num_base_bdevs_operational": 3, 00:20:29.671 "base_bdevs_list": [ 00:20:29.671 { 00:20:29.671 "name": "spare", 00:20:29.671 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:29.671 "is_configured": true, 00:20:29.671 "data_offset": 2048, 00:20:29.671 "data_size": 63488 00:20:29.671 }, 00:20:29.671 { 00:20:29.671 "name": null, 00:20:29.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.671 "is_configured": false, 00:20:29.671 "data_offset": 2048, 00:20:29.671 "data_size": 63488 00:20:29.671 }, 00:20:29.671 { 00:20:29.671 "name": "BaseBdev3", 00:20:29.671 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:29.671 "is_configured": true, 00:20:29.671 "data_offset": 2048, 00:20:29.671 "data_size": 63488 00:20:29.671 }, 00:20:29.671 { 00:20:29.671 "name": "BaseBdev4", 00:20:29.671 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:29.671 "is_configured": true, 00:20:29.671 "data_offset": 2048, 00:20:29.671 "data_size": 63488 00:20:29.671 } 00:20:29.671 ] 00:20:29.671 }' 00:20:29.671 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.928 "name": "raid_bdev1", 00:20:29.928 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:29.928 "strip_size_kb": 0, 00:20:29.928 "state": "online", 00:20:29.928 "raid_level": "raid1", 00:20:29.928 
"superblock": true, 00:20:29.928 "num_base_bdevs": 4, 00:20:29.928 "num_base_bdevs_discovered": 3, 00:20:29.928 "num_base_bdevs_operational": 3, 00:20:29.928 "base_bdevs_list": [ 00:20:29.928 { 00:20:29.928 "name": "spare", 00:20:29.928 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:29.928 "is_configured": true, 00:20:29.928 "data_offset": 2048, 00:20:29.928 "data_size": 63488 00:20:29.928 }, 00:20:29.928 { 00:20:29.928 "name": null, 00:20:29.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.928 "is_configured": false, 00:20:29.928 "data_offset": 2048, 00:20:29.928 "data_size": 63488 00:20:29.928 }, 00:20:29.928 { 00:20:29.928 "name": "BaseBdev3", 00:20:29.928 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:29.928 "is_configured": true, 00:20:29.928 "data_offset": 2048, 00:20:29.928 "data_size": 63488 00:20:29.928 }, 00:20:29.928 { 00:20:29.928 "name": "BaseBdev4", 00:20:29.928 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:29.928 "is_configured": true, 00:20:29.928 "data_offset": 2048, 00:20:29.928 "data_size": 63488 00:20:29.928 } 00:20:29.928 ] 00:20:29.928 }' 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.928 10:27:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.501 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:30.761 [2024-07-15 10:27:55.345442] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.761 [2024-07-15 10:27:55.345465] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:30.761 [2024-07-15 10:27:55.345514] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.761 [2024-07-15 10:27:55.345562] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.761 [2024-07-15 10:27:55.345570] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10f65b0 name raid_bdev1, state offline 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:30.761 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:31.018 /dev/nbd0 00:20:31.018 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:31.018 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:31.018 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:31.018 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:31.019 1+0 records in 00:20:31.019 1+0 records out 00:20:31.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253256 s, 16.2 MB/s 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:31.019 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:31.276 /dev/nbd1 00:20:31.276 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:31.276 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:31.276 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:31.276 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:20:31.276 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:31.276 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:31.276 10:27:55 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:31.277 1+0 records in 00:20:31.277 1+0 records out 00:20:31.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028027 s, 14.6 MB/s 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:31.277 10:27:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:31.277 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:31.277 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:31.277 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:31.277 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:31.277 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:31.277 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.277 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.534 10:27:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:31.791 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:32.049 [2024-07-15 10:27:56.728598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:32.049 [2024-07-15 10:27:56.728639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.049 [2024-07-15 10:27:56.728657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1293020 00:20:32.049 [2024-07-15 10:27:56.728665] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.049 [2024-07-15 10:27:56.729886] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.049 [2024-07-15 10:27:56.729917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:32.049 [2024-07-15 10:27:56.729980] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:32.049 [2024-07-15 10:27:56.730000] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:32.049 [2024-07-15 10:27:56.730088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:32.049 [2024-07-15 10:27:56.730134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:32.049 spare 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.049 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.049 [2024-07-15 10:27:56.830425] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10f5270 00:20:32.049 [2024-07-15 10:27:56.830441] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:32.049 [2024-07-15 10:27:56.830575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f4410 00:20:32.049 [2024-07-15 10:27:56.830675] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10f5270 00:20:32.049 [2024-07-15 10:27:56.830682] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10f5270 00:20:32.049 [2024-07-15 10:27:56.830750] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.307 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.307 "name": "raid_bdev1", 00:20:32.307 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:32.307 "strip_size_kb": 0, 00:20:32.307 "state": "online", 00:20:32.307 "raid_level": "raid1", 00:20:32.307 "superblock": true, 00:20:32.307 "num_base_bdevs": 4, 00:20:32.307 "num_base_bdevs_discovered": 3, 00:20:32.307 "num_base_bdevs_operational": 3, 00:20:32.307 "base_bdevs_list": [ 00:20:32.307 { 00:20:32.307 "name": "spare", 00:20:32.307 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:32.307 "is_configured": true, 00:20:32.307 "data_offset": 2048, 00:20:32.307 "data_size": 63488 00:20:32.307 }, 00:20:32.307 { 00:20:32.307 "name": null, 00:20:32.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.307 "is_configured": false, 00:20:32.307 "data_offset": 2048, 00:20:32.307 "data_size": 63488 00:20:32.307 }, 00:20:32.307 { 00:20:32.307 "name": "BaseBdev3", 00:20:32.307 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:32.307 "is_configured": true, 00:20:32.307 "data_offset": 2048, 00:20:32.307 "data_size": 63488 00:20:32.307 }, 00:20:32.307 { 00:20:32.307 "name": "BaseBdev4", 00:20:32.307 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:32.307 "is_configured": true, 00:20:32.307 "data_offset": 2048, 00:20:32.307 "data_size": 63488 00:20:32.307 } 00:20:32.307 ] 00:20:32.307 }' 00:20:32.307 10:27:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.307 10:27:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:32.885 10:27:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:32.885 "name": "raid_bdev1", 00:20:32.885 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:32.885 "strip_size_kb": 0, 00:20:32.885 "state": "online", 00:20:32.885 "raid_level": "raid1", 00:20:32.885 "superblock": true, 00:20:32.885 "num_base_bdevs": 4, 00:20:32.885 "num_base_bdevs_discovered": 3, 00:20:32.885 "num_base_bdevs_operational": 3, 00:20:32.885 "base_bdevs_list": [ 00:20:32.885 { 00:20:32.885 "name": "spare", 00:20:32.885 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:32.885 "is_configured": true, 00:20:32.885 "data_offset": 2048, 00:20:32.885 "data_size": 63488 00:20:32.885 }, 00:20:32.885 { 00:20:32.885 "name": null, 00:20:32.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.885 "is_configured": false, 00:20:32.885 "data_offset": 2048, 00:20:32.885 "data_size": 63488 00:20:32.885 }, 00:20:32.885 { 00:20:32.885 "name": "BaseBdev3", 00:20:32.885 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:32.885 "is_configured": true, 00:20:32.885 "data_offset": 2048, 00:20:32.885 "data_size": 63488 00:20:32.885 }, 00:20:32.885 { 00:20:32.885 "name": "BaseBdev4", 00:20:32.885 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:32.885 "is_configured": true, 00:20:32.885 "data_offset": 2048, 00:20:32.885 "data_size": 63488 00:20:32.885 } 00:20:32.885 ] 00:20:32.885 }' 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.885 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:33.165 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.165 10:27:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:33.423 [2024-07-15 10:27:57.995949] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=2 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.423 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.423 "name": "raid_bdev1", 00:20:33.423 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:33.423 "strip_size_kb": 0, 00:20:33.423 "state": "online", 00:20:33.423 "raid_level": "raid1", 00:20:33.423 "superblock": true, 00:20:33.423 "num_base_bdevs": 4, 00:20:33.423 "num_base_bdevs_discovered": 2, 00:20:33.423 "num_base_bdevs_operational": 2, 00:20:33.423 "base_bdevs_list": [ 00:20:33.423 { 00:20:33.424 "name": null, 00:20:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.424 "is_configured": false, 00:20:33.424 "data_offset": 2048, 00:20:33.424 "data_size": 63488 00:20:33.424 }, 00:20:33.424 { 00:20:33.424 "name": null, 00:20:33.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.424 "is_configured": false, 00:20:33.424 "data_offset": 2048, 00:20:33.424 "data_size": 63488 00:20:33.424 }, 00:20:33.424 { 00:20:33.424 "name": "BaseBdev3", 00:20:33.424 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:33.424 "is_configured": true, 00:20:33.424 "data_offset": 2048, 00:20:33.424 "data_size": 63488 00:20:33.424 }, 00:20:33.424 { 00:20:33.424 "name": "BaseBdev4", 00:20:33.424 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:33.424 "is_configured": true, 00:20:33.424 "data_offset": 2048, 00:20:33.424 "data_size": 63488 00:20:33.424 } 00:20:33.424 ] 00:20:33.424 }' 00:20:33.424 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.424 10:27:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.990 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:34.249 [2024-07-15 10:27:58.802034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:34.249 [2024-07-15 10:27:58.802160] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:34.249 [2024-07-15 10:27:58.802174] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
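The rebuild-progress checks above and below follow the loop traced at bdev_raid.sh@705-@710: query the raid bdev, confirm .process.type is "rebuild" and .process.target is "spare", then sleep and retry until the process block disappears. A condensed, illustrative version of that polling pattern, built only from the commands and jq fallbacks visible in this log (the loop body is a sketch, not the test's own code):

  # Illustrative rebuild-progress poll; assumes the SPDK RPC socket from this run is still live.
  while :; do
    info=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
             bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(echo "$info" | jq -r '.process.type // "none"')" = rebuild ] || break
    echo "$info" | jq -r '.process.progress.percent'   # e.g. 35, 51, 61 in the entries above
    sleep 1
  done
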
00:20:34.249 [2024-07-15 10:27:58.802199] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:34.249 [2024-07-15 10:27:58.805663] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f5920 00:20:34.249 [2024-07-15 10:27:58.807296] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:34.249 10:27:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:20:35.183 10:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.183 10:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:35.183 10:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:35.183 10:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:35.183 10:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:35.183 10:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.183 10:27:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.442 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:35.442 "name": "raid_bdev1", 00:20:35.442 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:35.442 "strip_size_kb": 0, 00:20:35.442 "state": "online", 00:20:35.442 "raid_level": "raid1", 00:20:35.442 "superblock": true, 00:20:35.442 "num_base_bdevs": 4, 00:20:35.442 "num_base_bdevs_discovered": 3, 00:20:35.442 "num_base_bdevs_operational": 3, 00:20:35.442 "process": { 00:20:35.442 "type": "rebuild", 00:20:35.442 "target": "spare", 00:20:35.442 "progress": { 00:20:35.442 "blocks": 22528, 00:20:35.442 "percent": 35 00:20:35.442 } 00:20:35.442 }, 00:20:35.442 "base_bdevs_list": [ 00:20:35.442 { 00:20:35.442 "name": "spare", 00:20:35.442 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:35.442 "is_configured": true, 00:20:35.442 "data_offset": 2048, 00:20:35.442 "data_size": 63488 00:20:35.442 }, 00:20:35.442 { 00:20:35.442 "name": null, 00:20:35.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.442 "is_configured": false, 00:20:35.442 "data_offset": 2048, 00:20:35.442 "data_size": 63488 00:20:35.442 }, 00:20:35.442 { 00:20:35.442 "name": "BaseBdev3", 00:20:35.442 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:35.442 "is_configured": true, 00:20:35.442 "data_offset": 2048, 00:20:35.442 "data_size": 63488 00:20:35.442 }, 00:20:35.442 { 00:20:35.442 "name": "BaseBdev4", 00:20:35.442 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:35.442 "is_configured": true, 00:20:35.442 "data_offset": 2048, 00:20:35.442 "data_size": 63488 00:20:35.442 } 00:20:35.442 ] 00:20:35.442 }' 00:20:35.442 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:35.442 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.442 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:35.442 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.442 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:35.442 [2024-07-15 10:28:00.215893] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:35.442 [2024-07-15 10:28:00.217068] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:35.442 [2024-07-15 10:28:00.217101] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.442 [2024-07-15 10:28:00.217112] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:35.442 [2024-07-15 10:28:00.217117] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.701 "name": "raid_bdev1", 00:20:35.701 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:35.701 "strip_size_kb": 0, 00:20:35.701 "state": "online", 00:20:35.701 "raid_level": "raid1", 00:20:35.701 "superblock": true, 00:20:35.701 "num_base_bdevs": 4, 00:20:35.701 "num_base_bdevs_discovered": 2, 00:20:35.701 "num_base_bdevs_operational": 2, 00:20:35.701 "base_bdevs_list": [ 00:20:35.701 { 00:20:35.701 "name": null, 00:20:35.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.701 "is_configured": false, 00:20:35.701 "data_offset": 2048, 00:20:35.701 "data_size": 63488 00:20:35.701 }, 00:20:35.701 { 00:20:35.701 "name": null, 00:20:35.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.701 "is_configured": false, 00:20:35.701 "data_offset": 2048, 00:20:35.701 "data_size": 63488 00:20:35.701 }, 00:20:35.701 { 00:20:35.701 "name": "BaseBdev3", 00:20:35.701 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:35.701 "is_configured": true, 00:20:35.701 "data_offset": 2048, 00:20:35.701 "data_size": 63488 00:20:35.701 }, 00:20:35.701 { 00:20:35.701 "name": "BaseBdev4", 00:20:35.701 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:35.701 "is_configured": true, 00:20:35.701 "data_offset": 2048, 00:20:35.701 "data_size": 63488 
00:20:35.701 } 00:20:35.701 ] 00:20:35.701 }' 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.701 10:28:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.267 10:28:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:36.525 [2024-07-15 10:28:01.066802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:36.525 [2024-07-15 10:28:01.066844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.525 [2024-07-15 10:28:01.066862] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f8790 00:20:36.525 [2024-07-15 10:28:01.066872] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.525 [2024-07-15 10:28:01.067161] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.525 [2024-07-15 10:28:01.067175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:36.525 [2024-07-15 10:28:01.067232] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:36.525 [2024-07-15 10:28:01.067240] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:36.525 [2024-07-15 10:28:01.067248] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:36.525 [2024-07-15 10:28:01.067261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.525 [2024-07-15 10:28:01.070723] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10fadb0 00:20:36.525 [2024-07-15 10:28:01.071696] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.525 spare 00:20:36.525 10:28:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:20:37.462 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.462 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:37.462 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:37.462 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:37.462 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:37.462 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.462 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.721 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.721 "name": "raid_bdev1", 00:20:37.721 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:37.721 "strip_size_kb": 0, 00:20:37.721 "state": "online", 00:20:37.721 "raid_level": "raid1", 00:20:37.721 "superblock": true, 00:20:37.721 "num_base_bdevs": 4, 00:20:37.721 "num_base_bdevs_discovered": 3, 00:20:37.721 "num_base_bdevs_operational": 3, 00:20:37.721 "process": { 00:20:37.721 "type": "rebuild", 00:20:37.721 "target": 
"spare", 00:20:37.721 "progress": { 00:20:37.721 "blocks": 22528, 00:20:37.721 "percent": 35 00:20:37.721 } 00:20:37.721 }, 00:20:37.721 "base_bdevs_list": [ 00:20:37.721 { 00:20:37.721 "name": "spare", 00:20:37.721 "uuid": "75abed6e-560e-518c-84a2-b8ad13058c98", 00:20:37.721 "is_configured": true, 00:20:37.721 "data_offset": 2048, 00:20:37.721 "data_size": 63488 00:20:37.721 }, 00:20:37.721 { 00:20:37.721 "name": null, 00:20:37.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.721 "is_configured": false, 00:20:37.721 "data_offset": 2048, 00:20:37.721 "data_size": 63488 00:20:37.721 }, 00:20:37.721 { 00:20:37.721 "name": "BaseBdev3", 00:20:37.721 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:37.721 "is_configured": true, 00:20:37.721 "data_offset": 2048, 00:20:37.721 "data_size": 63488 00:20:37.721 }, 00:20:37.721 { 00:20:37.721 "name": "BaseBdev4", 00:20:37.721 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:37.721 "is_configured": true, 00:20:37.721 "data_offset": 2048, 00:20:37.721 "data_size": 63488 00:20:37.721 } 00:20:37.721 ] 00:20:37.721 }' 00:20:37.721 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:37.721 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.721 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:37.721 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.721 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:37.721 [2024-07-15 10:28:02.503891] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.979 [2024-07-15 10:28:02.582048] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:37.980 [2024-07-15 10:28:02.582083] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.980 [2024-07-15 10:28:02.582094] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:37.980 [2024-07-15 10:28:02.582099] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.980 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.238 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.238 "name": "raid_bdev1", 00:20:38.238 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:38.238 "strip_size_kb": 0, 00:20:38.238 "state": "online", 00:20:38.238 "raid_level": "raid1", 00:20:38.238 "superblock": true, 00:20:38.238 "num_base_bdevs": 4, 00:20:38.238 "num_base_bdevs_discovered": 2, 00:20:38.238 "num_base_bdevs_operational": 2, 00:20:38.238 "base_bdevs_list": [ 00:20:38.238 { 00:20:38.238 "name": null, 00:20:38.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.238 "is_configured": false, 00:20:38.238 "data_offset": 2048, 00:20:38.238 "data_size": 63488 00:20:38.238 }, 00:20:38.238 { 00:20:38.238 "name": null, 00:20:38.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.238 "is_configured": false, 00:20:38.238 "data_offset": 2048, 00:20:38.238 "data_size": 63488 00:20:38.238 }, 00:20:38.238 { 00:20:38.238 "name": "BaseBdev3", 00:20:38.238 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:38.238 "is_configured": true, 00:20:38.238 "data_offset": 2048, 00:20:38.238 "data_size": 63488 00:20:38.238 }, 00:20:38.238 { 00:20:38.238 "name": "BaseBdev4", 00:20:38.238 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:38.238 "is_configured": true, 00:20:38.238 "data_offset": 2048, 00:20:38.238 "data_size": 63488 00:20:38.238 } 00:20:38.238 ] 00:20:38.238 }' 00:20:38.238 10:28:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.238 10:28:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.804 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.804 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:38.804 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:38.804 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:38.804 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:38.804 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.804 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.804 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:38.804 "name": "raid_bdev1", 00:20:38.804 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:38.804 "strip_size_kb": 0, 00:20:38.804 "state": "online", 00:20:38.804 "raid_level": "raid1", 00:20:38.804 "superblock": true, 00:20:38.804 "num_base_bdevs": 4, 00:20:38.804 "num_base_bdevs_discovered": 2, 00:20:38.804 "num_base_bdevs_operational": 2, 00:20:38.804 "base_bdevs_list": [ 00:20:38.804 { 00:20:38.804 "name": null, 00:20:38.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.804 "is_configured": false, 00:20:38.804 "data_offset": 2048, 00:20:38.804 "data_size": 63488 00:20:38.804 }, 00:20:38.804 { 00:20:38.804 "name": null, 00:20:38.804 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:38.804 "is_configured": false, 00:20:38.804 "data_offset": 2048, 00:20:38.804 "data_size": 63488 00:20:38.804 }, 00:20:38.804 { 00:20:38.804 "name": "BaseBdev3", 00:20:38.804 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:38.804 "is_configured": true, 00:20:38.804 "data_offset": 2048, 00:20:38.804 "data_size": 63488 00:20:38.804 }, 00:20:38.804 { 00:20:38.804 "name": "BaseBdev4", 00:20:38.804 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:38.804 "is_configured": true, 00:20:38.804 "data_offset": 2048, 00:20:38.805 "data_size": 63488 00:20:38.805 } 00:20:38.805 ] 00:20:38.805 }' 00:20:38.805 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:38.805 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:38.805 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:38.805 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:38.805 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:39.063 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:39.321 [2024-07-15 10:28:03.880654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:39.321 [2024-07-15 10:28:03.880694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.321 [2024-07-15 10:28:03.880726] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10f56e0 00:20:39.321 [2024-07-15 10:28:03.880735] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.321 [2024-07-15 10:28:03.880999] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.321 [2024-07-15 10:28:03.881012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:39.321 [2024-07-15 10:28:03.881067] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:39.321 [2024-07-15 10:28:03.881076] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:39.321 [2024-07-15 10:28:03.881083] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:39.321 BaseBdev1 00:20:39.321 10:28:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:40.256 10:28:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.256 10:28:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.515 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.515 "name": "raid_bdev1", 00:20:40.515 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:40.515 "strip_size_kb": 0, 00:20:40.515 "state": "online", 00:20:40.515 "raid_level": "raid1", 00:20:40.515 "superblock": true, 00:20:40.515 "num_base_bdevs": 4, 00:20:40.515 "num_base_bdevs_discovered": 2, 00:20:40.515 "num_base_bdevs_operational": 2, 00:20:40.515 "base_bdevs_list": [ 00:20:40.515 { 00:20:40.515 "name": null, 00:20:40.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.515 "is_configured": false, 00:20:40.515 "data_offset": 2048, 00:20:40.515 "data_size": 63488 00:20:40.515 }, 00:20:40.515 { 00:20:40.515 "name": null, 00:20:40.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.515 "is_configured": false, 00:20:40.515 "data_offset": 2048, 00:20:40.515 "data_size": 63488 00:20:40.515 }, 00:20:40.515 { 00:20:40.515 "name": "BaseBdev3", 00:20:40.515 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:40.515 "is_configured": true, 00:20:40.515 "data_offset": 2048, 00:20:40.515 "data_size": 63488 00:20:40.515 }, 00:20:40.515 { 00:20:40.515 "name": "BaseBdev4", 00:20:40.515 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:40.515 "is_configured": true, 00:20:40.515 "data_offset": 2048, 00:20:40.515 "data_size": 63488 00:20:40.515 } 00:20:40.515 ] 00:20:40.515 }' 00:20:40.515 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.515 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.773 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.773 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:40.773 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:40.773 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:41.032 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:41.032 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.032 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.032 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:41.032 "name": "raid_bdev1", 00:20:41.032 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:41.032 "strip_size_kb": 0, 00:20:41.032 "state": "online", 00:20:41.032 "raid_level": "raid1", 00:20:41.032 "superblock": true, 
00:20:41.032 "num_base_bdevs": 4, 00:20:41.032 "num_base_bdevs_discovered": 2, 00:20:41.032 "num_base_bdevs_operational": 2, 00:20:41.032 "base_bdevs_list": [ 00:20:41.032 { 00:20:41.032 "name": null, 00:20:41.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.032 "is_configured": false, 00:20:41.032 "data_offset": 2048, 00:20:41.032 "data_size": 63488 00:20:41.032 }, 00:20:41.032 { 00:20:41.032 "name": null, 00:20:41.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.032 "is_configured": false, 00:20:41.032 "data_offset": 2048, 00:20:41.032 "data_size": 63488 00:20:41.032 }, 00:20:41.032 { 00:20:41.032 "name": "BaseBdev3", 00:20:41.032 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:41.032 "is_configured": true, 00:20:41.032 "data_offset": 2048, 00:20:41.032 "data_size": 63488 00:20:41.032 }, 00:20:41.032 { 00:20:41.032 "name": "BaseBdev4", 00:20:41.032 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:41.032 "is_configured": true, 00:20:41.032 "data_offset": 2048, 00:20:41.032 "data_size": 63488 00:20:41.032 } 00:20:41.032 ] 00:20:41.032 }' 00:20:41.032 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:41.032 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:41.032 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:41.290 [2024-07-15 10:28:05.982101] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.290 [2024-07-15 10:28:05.982202] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:41.290 [2024-07-15 10:28:05.982213] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:41.290 request: 00:20:41.290 { 00:20:41.290 "base_bdev": "BaseBdev1", 00:20:41.290 "raid_bdev": "raid_bdev1", 00:20:41.290 "method": "bdev_raid_add_base_bdev", 00:20:41.290 "req_id": 1 00:20:41.290 } 00:20:41.290 Got JSON-RPC error response 00:20:41.290 response: 00:20:41.290 { 00:20:41.290 "code": -22, 00:20:41.290 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:41.290 } 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.290 10:28:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.227 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.485 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.485 "name": "raid_bdev1", 00:20:42.485 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:42.485 "strip_size_kb": 0, 00:20:42.485 "state": "online", 00:20:42.485 "raid_level": "raid1", 00:20:42.485 "superblock": true, 00:20:42.485 "num_base_bdevs": 4, 00:20:42.485 "num_base_bdevs_discovered": 2, 00:20:42.485 "num_base_bdevs_operational": 2, 00:20:42.485 "base_bdevs_list": [ 00:20:42.485 { 00:20:42.485 "name": null, 00:20:42.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.485 "is_configured": false, 00:20:42.485 "data_offset": 2048, 00:20:42.485 "data_size": 63488 00:20:42.485 }, 00:20:42.485 { 00:20:42.485 "name": null, 00:20:42.485 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:42.485 "is_configured": false, 00:20:42.485 "data_offset": 2048, 00:20:42.485 "data_size": 63488 00:20:42.485 }, 00:20:42.485 { 00:20:42.485 "name": "BaseBdev3", 00:20:42.485 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:42.485 "is_configured": true, 00:20:42.485 "data_offset": 2048, 00:20:42.485 "data_size": 63488 00:20:42.485 }, 00:20:42.485 { 00:20:42.485 "name": "BaseBdev4", 00:20:42.485 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:42.485 "is_configured": true, 00:20:42.485 "data_offset": 2048, 00:20:42.485 "data_size": 63488 00:20:42.485 } 00:20:42.485 ] 00:20:42.485 }' 00:20:42.485 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.485 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.048 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:43.048 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:43.049 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:43.049 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:43.049 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:43.049 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.049 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:43.306 "name": "raid_bdev1", 00:20:43.306 "uuid": "e217046e-5187-42b2-8e6a-ee345ec1f058", 00:20:43.306 "strip_size_kb": 0, 00:20:43.306 "state": "online", 00:20:43.306 "raid_level": "raid1", 00:20:43.306 "superblock": true, 00:20:43.306 "num_base_bdevs": 4, 00:20:43.306 "num_base_bdevs_discovered": 2, 00:20:43.306 "num_base_bdevs_operational": 2, 00:20:43.306 "base_bdevs_list": [ 00:20:43.306 { 00:20:43.306 "name": null, 00:20:43.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.306 "is_configured": false, 00:20:43.306 "data_offset": 2048, 00:20:43.306 "data_size": 63488 00:20:43.306 }, 00:20:43.306 { 00:20:43.306 "name": null, 00:20:43.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.306 "is_configured": false, 00:20:43.306 "data_offset": 2048, 00:20:43.306 "data_size": 63488 00:20:43.306 }, 00:20:43.306 { 00:20:43.306 "name": "BaseBdev3", 00:20:43.306 "uuid": "78fd8178-0db9-5c56-87c6-c9bccc8b5fd7", 00:20:43.306 "is_configured": true, 00:20:43.306 "data_offset": 2048, 00:20:43.306 "data_size": 63488 00:20:43.306 }, 00:20:43.306 { 00:20:43.306 "name": "BaseBdev4", 00:20:43.306 "uuid": "fcec988e-50bc-59d2-a033-18a1fd56d35c", 00:20:43.306 "is_configured": true, 00:20:43.306 "data_offset": 2048, 00:20:43.306 "data_size": 63488 00:20:43.306 } 00:20:43.306 ] 00:20:43.306 }' 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 1868777 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 1868777 ']' 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 1868777 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1868777 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1868777' 00:20:43.306 killing process with pid 1868777 00:20:43.306 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 1868777 00:20:43.306 Received shutdown signal, test time was about 60.000000 seconds 00:20:43.306 00:20:43.306 Latency(us) 00:20:43.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.306 =================================================================================================================== 00:20:43.307 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.307 [2024-07-15 10:28:07.998547] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.307 [2024-07-15 10:28:07.998619] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.307 [2024-07-15 10:28:07.998657] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.307 [2024-07-15 10:28:07.998665] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10f5270 name raid_bdev1, state offline 00:20:43.307 10:28:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 1868777 00:20:43.307 [2024-07-15 10:28:08.035882] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:20:43.565 00:20:43.565 real 0m30.149s 00:20:43.565 user 0m42.663s 00:20:43.565 sys 0m5.522s 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.565 ************************************ 00:20:43.565 END TEST raid_rebuild_test_sb 00:20:43.565 ************************************ 00:20:43.565 10:28:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:43.565 10:28:08 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:20:43.565 10:28:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:20:43.565 10:28:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:43.565 10:28:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.565 ************************************ 00:20:43.565 START TEST raid_rebuild_test_io 00:20:43.565 ************************************ 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1874271 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1874271 /var/tmp/spdk-raid.sock 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 1874271 ']' 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:43.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.565 10:28:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:43.565 [2024-07-15 10:28:08.352379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:43.565 [2024-07-15 10:28:08.352423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1874271 ] 00:20:43.565 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:43.565 Zero copy mechanism will not be used. 00:20:43.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.822 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:43.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.822 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:43.822 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.822 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:43.823 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:43.823 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:43.823 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:43.823 [2024-07-15 10:28:08.439524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.823 [2024-07-15 10:28:08.509188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.823 [2024-07-15 10:28:08.559224] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.823 [2024-07-15 10:28:08.559252] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.388 10:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.388 10:28:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:20:44.388 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:44.388 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:44.646 BaseBdev1_malloc 00:20:44.646 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:44.903 [2024-07-15 10:28:09.455331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:44.903 [2024-07-15 10:28:09.455366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.903 [2024-07-15 10:28:09.455381] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26bc5f0 00:20:44.903 [2024-07-15 10:28:09.455406] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.903 [2024-07-15 10:28:09.456532] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.903 [2024-07-15 10:28:09.456555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:44.903 BaseBdev1 00:20:44.903 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:44.903 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:44.903 BaseBdev2_malloc 00:20:44.903 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:45.161 [2024-07-15 10:28:09.783774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:45.161 [2024-07-15 10:28:09.783808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.161 [2024-07-15 10:28:09.783823] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2860130 00:20:45.161 [2024-07-15 10:28:09.783831] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.161 [2024-07-15 10:28:09.784881] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.161 [2024-07-15 10:28:09.784910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:45.161 BaseBdev2 00:20:45.161 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:45.161 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:45.161 BaseBdev3_malloc 00:20:45.420 10:28:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:45.420 [2024-07-15 10:28:10.112164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:45.420 [2024-07-15 10:28:10.112200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.420 [2024-07-15 10:28:10.112215] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2856420 00:20:45.420 [2024-07-15 10:28:10.112239] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.420 [2024-07-15 10:28:10.113357] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.420 [2024-07-15 10:28:10.113379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:45.420 BaseBdev3 00:20:45.420 10:28:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:20:45.420 10:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:45.679 BaseBdev4_malloc 00:20:45.679 10:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:45.679 [2024-07-15 10:28:10.440624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:45.679 [2024-07-15 10:28:10.440658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.679 [2024-07-15 10:28:10.440673] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2856d40 00:20:45.679 [2024-07-15 10:28:10.440682] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.679 [2024-07-15 10:28:10.441672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.679 [2024-07-15 10:28:10.441694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:45.679 BaseBdev4 00:20:45.679 10:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:45.937 spare_malloc 00:20:45.937 10:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:46.197 spare_delay 00:20:46.197 10:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:46.197 [2024-07-15 10:28:10.953602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:46.197 [2024-07-15 10:28:10.953634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.197 [2024-07-15 10:28:10.953650] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26b5db0 00:20:46.197 [2024-07-15 10:28:10.953674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.197 [2024-07-15 10:28:10.954746] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.197 [2024-07-15 10:28:10.954769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:46.197 spare 00:20:46.197 10:28:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:46.486 [2024-07-15 10:28:11.122053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.486 [2024-07-15 10:28:11.122891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.486 [2024-07-15 10:28:11.122950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.486 [2024-07-15 10:28:11.122980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is 
claimed 00:20:46.486 [2024-07-15 10:28:11.123033] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26b85b0 00:20:46.486 [2024-07-15 10:28:11.123039] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:46.486 [2024-07-15 10:28:11.123184] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26bb380 00:20:46.486 [2024-07-15 10:28:11.123289] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26b85b0 00:20:46.486 [2024-07-15 10:28:11.123296] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26b85b0 00:20:46.486 [2024-07-15 10:28:11.123373] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.486 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.747 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.747 "name": "raid_bdev1", 00:20:46.747 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:46.747 "strip_size_kb": 0, 00:20:46.747 "state": "online", 00:20:46.747 "raid_level": "raid1", 00:20:46.747 "superblock": false, 00:20:46.747 "num_base_bdevs": 4, 00:20:46.747 "num_base_bdevs_discovered": 4, 00:20:46.747 "num_base_bdevs_operational": 4, 00:20:46.747 "base_bdevs_list": [ 00:20:46.747 { 00:20:46.747 "name": "BaseBdev1", 00:20:46.747 "uuid": "1b677c31-8ff4-5e98-be86-d2de6ffe3430", 00:20:46.747 "is_configured": true, 00:20:46.747 "data_offset": 0, 00:20:46.747 "data_size": 65536 00:20:46.747 }, 00:20:46.747 { 00:20:46.747 "name": "BaseBdev2", 00:20:46.747 "uuid": "9bff82ec-1a73-5088-8952-9d6534a0ccdb", 00:20:46.747 "is_configured": true, 00:20:46.747 "data_offset": 0, 00:20:46.747 "data_size": 65536 00:20:46.747 }, 00:20:46.747 { 00:20:46.747 "name": "BaseBdev3", 00:20:46.747 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:46.747 "is_configured": true, 00:20:46.747 "data_offset": 0, 00:20:46.747 "data_size": 65536 00:20:46.747 }, 00:20:46.747 { 00:20:46.747 "name": "BaseBdev4", 00:20:46.747 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:46.747 "is_configured": true, 00:20:46.747 "data_offset": 0, 00:20:46.747 
"data_size": 65536 00:20:46.747 } 00:20:46.747 ] 00:20:46.747 }' 00:20:46.747 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.747 10:28:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:47.311 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:47.311 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:20:47.311 [2024-07-15 10:28:11.952487] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.311 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:20:47.311 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:47.311 10:28:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.569 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:20:47.569 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:20:47.569 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:47.569 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:47.569 [2024-07-15 10:28:12.230909] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26bb8e0 00:20:47.569 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:47.569 Zero copy mechanism will not be used. 00:20:47.569 Running I/O for 60 seconds... 
00:20:47.570 [2024-07-15 10:28:12.314381] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:47.570 [2024-07-15 10:28:12.314539] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x26bb8e0 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.570 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.828 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.828 "name": "raid_bdev1", 00:20:47.828 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:47.828 "strip_size_kb": 0, 00:20:47.828 "state": "online", 00:20:47.828 "raid_level": "raid1", 00:20:47.828 "superblock": false, 00:20:47.828 "num_base_bdevs": 4, 00:20:47.828 "num_base_bdevs_discovered": 3, 00:20:47.828 "num_base_bdevs_operational": 3, 00:20:47.828 "base_bdevs_list": [ 00:20:47.828 { 00:20:47.828 "name": null, 00:20:47.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.828 "is_configured": false, 00:20:47.828 "data_offset": 0, 00:20:47.828 "data_size": 65536 00:20:47.828 }, 00:20:47.828 { 00:20:47.828 "name": "BaseBdev2", 00:20:47.828 "uuid": "9bff82ec-1a73-5088-8952-9d6534a0ccdb", 00:20:47.828 "is_configured": true, 00:20:47.828 "data_offset": 0, 00:20:47.828 "data_size": 65536 00:20:47.828 }, 00:20:47.828 { 00:20:47.828 "name": "BaseBdev3", 00:20:47.828 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:47.828 "is_configured": true, 00:20:47.828 "data_offset": 0, 00:20:47.828 "data_size": 65536 00:20:47.828 }, 00:20:47.828 { 00:20:47.828 "name": "BaseBdev4", 00:20:47.828 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:47.828 "is_configured": true, 00:20:47.828 "data_offset": 0, 00:20:47.828 "data_size": 65536 00:20:47.828 } 00:20:47.828 ] 00:20:47.828 }' 00:20:47.828 10:28:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.828 10:28:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:48.394 10:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:48.653 [2024-07-15 10:28:13.202550] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:48.653 10:28:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:20:48.653 [2024-07-15 10:28:13.252199] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2753950 00:20:48.653 [2024-07-15 10:28:13.254024] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:48.653 [2024-07-15 10:28:13.375718] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:48.653 [2024-07-15 10:28:13.376814] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:48.911 [2024-07-15 10:28:13.583918] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:48.911 [2024-07-15 10:28:13.584034] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:49.478 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.478 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:49.478 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:49.478 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:49.478 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:49.478 [2024-07-15 10:28:14.251612] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:49.478 [2024-07-15 10:28:14.251814] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:49.478 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.478 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.736 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:49.736 "name": "raid_bdev1", 00:20:49.736 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:49.736 "strip_size_kb": 0, 00:20:49.736 "state": "online", 00:20:49.736 "raid_level": "raid1", 00:20:49.736 "superblock": false, 00:20:49.736 "num_base_bdevs": 4, 00:20:49.736 "num_base_bdevs_discovered": 4, 00:20:49.736 "num_base_bdevs_operational": 4, 00:20:49.736 "process": { 00:20:49.736 "type": "rebuild", 00:20:49.736 "target": "spare", 00:20:49.736 "progress": { 00:20:49.736 "blocks": 14336, 00:20:49.736 "percent": 21 00:20:49.736 } 00:20:49.736 }, 00:20:49.736 "base_bdevs_list": [ 00:20:49.736 { 00:20:49.736 "name": "spare", 00:20:49.736 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:49.736 "is_configured": true, 00:20:49.736 "data_offset": 0, 00:20:49.736 "data_size": 65536 00:20:49.736 }, 00:20:49.736 { 00:20:49.736 "name": "BaseBdev2", 00:20:49.736 "uuid": "9bff82ec-1a73-5088-8952-9d6534a0ccdb", 00:20:49.736 "is_configured": true, 00:20:49.736 "data_offset": 0, 00:20:49.736 "data_size": 65536 00:20:49.736 }, 00:20:49.736 { 00:20:49.736 "name": "BaseBdev3", 00:20:49.736 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:49.736 
"is_configured": true, 00:20:49.736 "data_offset": 0, 00:20:49.736 "data_size": 65536 00:20:49.736 }, 00:20:49.736 { 00:20:49.736 "name": "BaseBdev4", 00:20:49.736 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:49.736 "is_configured": true, 00:20:49.736 "data_offset": 0, 00:20:49.736 "data_size": 65536 00:20:49.736 } 00:20:49.736 ] 00:20:49.736 }' 00:20:49.736 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:49.736 [2024-07-15 10:28:14.461354] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:49.736 [2024-07-15 10:28:14.461584] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:49.736 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.736 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:49.736 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.736 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:49.995 [2024-07-15 10:28:14.668305] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.995 [2024-07-15 10:28:14.704851] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:49.995 [2024-07-15 10:28:14.705123] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:50.254 [2024-07-15 10:28:14.812148] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:50.254 [2024-07-15 10:28:14.815555] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.254 [2024-07-15 10:28:14.815575] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:50.254 [2024-07-15 10:28:14.815581] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:50.254 [2024-07-15 10:28:14.831307] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x26bb8e0 00:20:50.254 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:50.254 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:50.254 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:50.254 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.254 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.254 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.255 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.255 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.255 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.255 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.255 
10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.255 10:28:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.513 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.513 "name": "raid_bdev1", 00:20:50.513 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:50.513 "strip_size_kb": 0, 00:20:50.513 "state": "online", 00:20:50.513 "raid_level": "raid1", 00:20:50.513 "superblock": false, 00:20:50.513 "num_base_bdevs": 4, 00:20:50.513 "num_base_bdevs_discovered": 3, 00:20:50.513 "num_base_bdevs_operational": 3, 00:20:50.513 "base_bdevs_list": [ 00:20:50.513 { 00:20:50.513 "name": null, 00:20:50.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.513 "is_configured": false, 00:20:50.513 "data_offset": 0, 00:20:50.513 "data_size": 65536 00:20:50.513 }, 00:20:50.513 { 00:20:50.513 "name": "BaseBdev2", 00:20:50.513 "uuid": "9bff82ec-1a73-5088-8952-9d6534a0ccdb", 00:20:50.513 "is_configured": true, 00:20:50.513 "data_offset": 0, 00:20:50.513 "data_size": 65536 00:20:50.513 }, 00:20:50.513 { 00:20:50.513 "name": "BaseBdev3", 00:20:50.513 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:50.513 "is_configured": true, 00:20:50.513 "data_offset": 0, 00:20:50.513 "data_size": 65536 00:20:50.514 }, 00:20:50.514 { 00:20:50.514 "name": "BaseBdev4", 00:20:50.514 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:50.514 "is_configured": true, 00:20:50.514 "data_offset": 0, 00:20:50.514 "data_size": 65536 00:20:50.514 } 00:20:50.514 ] 00:20:50.514 }' 00:20:50.514 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.514 10:28:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:50.772 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:50.772 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:50.772 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:50.772 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:50.772 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:50.772 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.031 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.031 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.031 "name": "raid_bdev1", 00:20:51.031 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:51.031 "strip_size_kb": 0, 00:20:51.031 "state": "online", 00:20:51.031 "raid_level": "raid1", 00:20:51.031 "superblock": false, 00:20:51.031 "num_base_bdevs": 4, 00:20:51.031 "num_base_bdevs_discovered": 3, 00:20:51.031 "num_base_bdevs_operational": 3, 00:20:51.031 "base_bdevs_list": [ 00:20:51.031 { 00:20:51.031 "name": null, 00:20:51.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.031 "is_configured": false, 00:20:51.031 "data_offset": 0, 00:20:51.031 "data_size": 65536 00:20:51.031 }, 00:20:51.031 { 00:20:51.031 "name": 
"BaseBdev2", 00:20:51.031 "uuid": "9bff82ec-1a73-5088-8952-9d6534a0ccdb", 00:20:51.031 "is_configured": true, 00:20:51.031 "data_offset": 0, 00:20:51.031 "data_size": 65536 00:20:51.031 }, 00:20:51.031 { 00:20:51.031 "name": "BaseBdev3", 00:20:51.031 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:51.031 "is_configured": true, 00:20:51.031 "data_offset": 0, 00:20:51.031 "data_size": 65536 00:20:51.031 }, 00:20:51.031 { 00:20:51.031 "name": "BaseBdev4", 00:20:51.031 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:51.031 "is_configured": true, 00:20:51.031 "data_offset": 0, 00:20:51.031 "data_size": 65536 00:20:51.031 } 00:20:51.031 ] 00:20:51.031 }' 00:20:51.031 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:51.031 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:51.031 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:51.031 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:51.031 10:28:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.289 [2024-07-15 10:28:15.981696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.289 [2024-07-15 10:28:16.013884] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26e2890 00:20:51.289 [2024-07-15 10:28:16.015013] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:51.289 10:28:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:51.548 [2024-07-15 10:28:16.128536] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:51.548 [2024-07-15 10:28:16.129513] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:51.548 [2024-07-15 10:28:16.331346] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:51.548 [2024-07-15 10:28:16.331655] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:52.113 [2024-07-15 10:28:16.666701] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:52.113 [2024-07-15 10:28:16.667725] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:52.113 [2024-07-15 10:28:16.894792] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:52.372 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.372 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:52.372 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:52.372 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:52.372 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:52.372 10:28:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.372 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:52.630 "name": "raid_bdev1", 00:20:52.630 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:52.630 "strip_size_kb": 0, 00:20:52.630 "state": "online", 00:20:52.630 "raid_level": "raid1", 00:20:52.630 "superblock": false, 00:20:52.630 "num_base_bdevs": 4, 00:20:52.630 "num_base_bdevs_discovered": 4, 00:20:52.630 "num_base_bdevs_operational": 4, 00:20:52.630 "process": { 00:20:52.630 "type": "rebuild", 00:20:52.630 "target": "spare", 00:20:52.630 "progress": { 00:20:52.630 "blocks": 12288, 00:20:52.630 "percent": 18 00:20:52.630 } 00:20:52.630 }, 00:20:52.630 "base_bdevs_list": [ 00:20:52.630 { 00:20:52.630 "name": "spare", 00:20:52.630 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:52.630 "is_configured": true, 00:20:52.630 "data_offset": 0, 00:20:52.630 "data_size": 65536 00:20:52.630 }, 00:20:52.630 { 00:20:52.630 "name": "BaseBdev2", 00:20:52.630 "uuid": "9bff82ec-1a73-5088-8952-9d6534a0ccdb", 00:20:52.630 "is_configured": true, 00:20:52.630 "data_offset": 0, 00:20:52.630 "data_size": 65536 00:20:52.630 }, 00:20:52.630 { 00:20:52.630 "name": "BaseBdev3", 00:20:52.630 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:52.630 "is_configured": true, 00:20:52.630 "data_offset": 0, 00:20:52.630 "data_size": 65536 00:20:52.630 }, 00:20:52.630 { 00:20:52.630 "name": "BaseBdev4", 00:20:52.630 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:52.630 "is_configured": true, 00:20:52.630 "data_offset": 0, 00:20:52.630 "data_size": 65536 00:20:52.630 } 00:20:52.630 ] 00:20:52.630 }' 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:20:52.630 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:52.630 [2024-07-15 10:28:17.357393] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:52.889 [2024-07-15 10:28:17.428225] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:52.889 [2024-07-15 10:28:17.505234] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x26bb8e0 00:20:52.889 [2024-07-15 10:28:17.505252] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x26e2890 00:20:52.889 10:28:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:20:52.889 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:20:52.889 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.889 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:52.889 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:52.889 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:52.889 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:52.889 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.889 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.889 [2024-07-15 10:28:17.633620] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:53.148 "name": "raid_bdev1", 00:20:53.148 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:53.148 "strip_size_kb": 0, 00:20:53.148 "state": "online", 00:20:53.148 "raid_level": "raid1", 00:20:53.148 "superblock": false, 00:20:53.148 "num_base_bdevs": 4, 00:20:53.148 "num_base_bdevs_discovered": 3, 00:20:53.148 "num_base_bdevs_operational": 3, 00:20:53.148 "process": { 00:20:53.148 "type": "rebuild", 00:20:53.148 "target": "spare", 00:20:53.148 "progress": { 00:20:53.148 "blocks": 20480, 00:20:53.148 "percent": 31 00:20:53.148 } 00:20:53.148 }, 00:20:53.148 "base_bdevs_list": [ 00:20:53.148 { 00:20:53.148 "name": "spare", 00:20:53.148 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:53.148 "is_configured": true, 00:20:53.148 "data_offset": 0, 00:20:53.148 "data_size": 65536 00:20:53.148 }, 00:20:53.148 { 00:20:53.148 "name": null, 00:20:53.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.148 "is_configured": false, 00:20:53.148 "data_offset": 0, 00:20:53.148 "data_size": 65536 00:20:53.148 }, 00:20:53.148 { 00:20:53.148 "name": "BaseBdev3", 00:20:53.148 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:53.148 "is_configured": true, 00:20:53.148 "data_offset": 0, 00:20:53.148 "data_size": 65536 00:20:53.148 }, 00:20:53.148 { 00:20:53.148 "name": "BaseBdev4", 00:20:53.148 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:53.148 "is_configured": true, 00:20:53.148 "data_offset": 0, 00:20:53.148 "data_size": 65536 00:20:53.148 } 00:20:53.148 ] 00:20:53.148 }' 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:53.148 [2024-07-15 10:28:17.758708] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@705 -- # local timeout=718 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.148 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.407 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:53.407 "name": "raid_bdev1", 00:20:53.407 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:53.407 "strip_size_kb": 0, 00:20:53.407 "state": "online", 00:20:53.407 "raid_level": "raid1", 00:20:53.407 "superblock": false, 00:20:53.407 "num_base_bdevs": 4, 00:20:53.407 "num_base_bdevs_discovered": 3, 00:20:53.407 "num_base_bdevs_operational": 3, 00:20:53.407 "process": { 00:20:53.407 "type": "rebuild", 00:20:53.407 "target": "spare", 00:20:53.407 "progress": { 00:20:53.407 "blocks": 24576, 00:20:53.407 "percent": 37 00:20:53.407 } 00:20:53.407 }, 00:20:53.407 "base_bdevs_list": [ 00:20:53.407 { 00:20:53.407 "name": "spare", 00:20:53.407 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:53.407 "is_configured": true, 00:20:53.407 "data_offset": 0, 00:20:53.407 "data_size": 65536 00:20:53.407 }, 00:20:53.407 { 00:20:53.407 "name": null, 00:20:53.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.407 "is_configured": false, 00:20:53.407 "data_offset": 0, 00:20:53.407 "data_size": 65536 00:20:53.407 }, 00:20:53.407 { 00:20:53.407 "name": "BaseBdev3", 00:20:53.407 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:53.407 "is_configured": true, 00:20:53.407 "data_offset": 0, 00:20:53.407 "data_size": 65536 00:20:53.407 }, 00:20:53.407 { 00:20:53.407 "name": "BaseBdev4", 00:20:53.407 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:53.407 "is_configured": true, 00:20:53.407 "data_offset": 0, 00:20:53.407 "data_size": 65536 00:20:53.407 } 00:20:53.407 ] 00:20:53.407 }' 00:20:53.407 10:28:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:53.407 10:28:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.407 10:28:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:53.407 [2024-07-15 10:28:18.004736] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:53.407 10:28:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.407 10:28:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:53.666 [2024-07-15 10:28:18.219150] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:54.230 
[2024-07-15 10:28:18.793710] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:54.230 [2024-07-15 10:28:18.908101] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:54.230 [2024-07-15 10:28:18.908457] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:54.500 "name": "raid_bdev1", 00:20:54.500 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:54.500 "strip_size_kb": 0, 00:20:54.500 "state": "online", 00:20:54.500 "raid_level": "raid1", 00:20:54.500 "superblock": false, 00:20:54.500 "num_base_bdevs": 4, 00:20:54.500 "num_base_bdevs_discovered": 3, 00:20:54.500 "num_base_bdevs_operational": 3, 00:20:54.500 "process": { 00:20:54.500 "type": "rebuild", 00:20:54.500 "target": "spare", 00:20:54.500 "progress": { 00:20:54.500 "blocks": 43008, 00:20:54.500 "percent": 65 00:20:54.500 } 00:20:54.500 }, 00:20:54.500 "base_bdevs_list": [ 00:20:54.500 { 00:20:54.500 "name": "spare", 00:20:54.500 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:54.500 "is_configured": true, 00:20:54.500 "data_offset": 0, 00:20:54.500 "data_size": 65536 00:20:54.500 }, 00:20:54.500 { 00:20:54.500 "name": null, 00:20:54.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.500 "is_configured": false, 00:20:54.500 "data_offset": 0, 00:20:54.500 "data_size": 65536 00:20:54.500 }, 00:20:54.500 { 00:20:54.500 "name": "BaseBdev3", 00:20:54.500 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:54.500 "is_configured": true, 00:20:54.500 "data_offset": 0, 00:20:54.500 "data_size": 65536 00:20:54.500 }, 00:20:54.500 { 00:20:54.500 "name": "BaseBdev4", 00:20:54.500 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:54.500 "is_configured": true, 00:20:54.500 "data_offset": 0, 00:20:54.500 "data_size": 65536 00:20:54.500 } 00:20:54.500 ] 00:20:54.500 }' 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.500 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:54.760 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 
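The repeated verify_raid_bdev_process checks traced in this stretch amount to polling bdev_raid_get_bdevs and filtering the process fields with jq. A rough sketch of that polling logic, where the while loop and the RPC shorthand are illustrative assumptions while the jq filters and the 718-second timeout come from the trace:

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=718
    while (( SECONDS < timeout )); do
        info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # keep waiting only while a rebuild onto the spare is still reported
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
        sleep 1
    done

Once .process reports "none", the script stops polling and re-verifies the final raid state, as the following trace lines show.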
00:20:54.760 10:28:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:20:54.760 [2024-07-15 10:28:19.373848] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:55.695 [2024-07-15 10:28:20.138237] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.695 "name": "raid_bdev1", 00:20:55.695 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:55.695 "strip_size_kb": 0, 00:20:55.695 "state": "online", 00:20:55.695 "raid_level": "raid1", 00:20:55.695 "superblock": false, 00:20:55.695 "num_base_bdevs": 4, 00:20:55.695 "num_base_bdevs_discovered": 3, 00:20:55.695 "num_base_bdevs_operational": 3, 00:20:55.695 "process": { 00:20:55.695 "type": "rebuild", 00:20:55.695 "target": "spare", 00:20:55.695 "progress": { 00:20:55.695 "blocks": 61440, 00:20:55.695 "percent": 93 00:20:55.695 } 00:20:55.695 }, 00:20:55.695 "base_bdevs_list": [ 00:20:55.695 { 00:20:55.695 "name": "spare", 00:20:55.695 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:55.695 "is_configured": true, 00:20:55.695 "data_offset": 0, 00:20:55.695 "data_size": 65536 00:20:55.695 }, 00:20:55.695 { 00:20:55.695 "name": null, 00:20:55.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.695 "is_configured": false, 00:20:55.695 "data_offset": 0, 00:20:55.695 "data_size": 65536 00:20:55.695 }, 00:20:55.695 { 00:20:55.695 "name": "BaseBdev3", 00:20:55.695 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:55.695 "is_configured": true, 00:20:55.695 "data_offset": 0, 00:20:55.695 "data_size": 65536 00:20:55.695 }, 00:20:55.695 { 00:20:55.695 "name": "BaseBdev4", 00:20:55.695 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:55.695 "is_configured": true, 00:20:55.695 "data_offset": 0, 00:20:55.695 "data_size": 65536 00:20:55.695 } 00:20:55.695 ] 00:20:55.695 }' 00:20:55.695 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:55.953 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.953 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:55.953 10:28:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.953 10:28:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@710 -- # sleep 1 00:20:55.953 [2024-07-15 10:28:20.572332] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:55.953 [2024-07-15 10:28:20.672561] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:55.953 [2024-07-15 10:28:20.679872] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:56.887 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:20:56.887 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.887 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:56.887 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:20:56.887 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:20:56.887 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:56.887 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.887 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:57.146 "name": "raid_bdev1", 00:20:57.146 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:57.146 "strip_size_kb": 0, 00:20:57.146 "state": "online", 00:20:57.146 "raid_level": "raid1", 00:20:57.146 "superblock": false, 00:20:57.146 "num_base_bdevs": 4, 00:20:57.146 "num_base_bdevs_discovered": 3, 00:20:57.146 "num_base_bdevs_operational": 3, 00:20:57.146 "base_bdevs_list": [ 00:20:57.146 { 00:20:57.146 "name": "spare", 00:20:57.146 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:57.146 "is_configured": true, 00:20:57.146 "data_offset": 0, 00:20:57.146 "data_size": 65536 00:20:57.146 }, 00:20:57.146 { 00:20:57.146 "name": null, 00:20:57.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.146 "is_configured": false, 00:20:57.146 "data_offset": 0, 00:20:57.146 "data_size": 65536 00:20:57.146 }, 00:20:57.146 { 00:20:57.146 "name": "BaseBdev3", 00:20:57.146 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:57.146 "is_configured": true, 00:20:57.146 "data_offset": 0, 00:20:57.146 "data_size": 65536 00:20:57.146 }, 00:20:57.146 { 00:20:57.146 "name": "BaseBdev4", 00:20:57.146 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:57.146 "is_configured": true, 00:20:57.146 "data_offset": 0, 00:20:57.146 "data_size": 65536 00:20:57.146 } 00:20:57.146 ] 00:20:57.146 }' 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.146 10:28:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:57.405 "name": "raid_bdev1", 00:20:57.405 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:57.405 "strip_size_kb": 0, 00:20:57.405 "state": "online", 00:20:57.405 "raid_level": "raid1", 00:20:57.405 "superblock": false, 00:20:57.405 "num_base_bdevs": 4, 00:20:57.405 "num_base_bdevs_discovered": 3, 00:20:57.405 "num_base_bdevs_operational": 3, 00:20:57.405 "base_bdevs_list": [ 00:20:57.405 { 00:20:57.405 "name": "spare", 00:20:57.405 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:57.405 "is_configured": true, 00:20:57.405 "data_offset": 0, 00:20:57.405 "data_size": 65536 00:20:57.405 }, 00:20:57.405 { 00:20:57.405 "name": null, 00:20:57.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.405 "is_configured": false, 00:20:57.405 "data_offset": 0, 00:20:57.405 "data_size": 65536 00:20:57.405 }, 00:20:57.405 { 00:20:57.405 "name": "BaseBdev3", 00:20:57.405 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:57.405 "is_configured": true, 00:20:57.405 "data_offset": 0, 00:20:57.405 "data_size": 65536 00:20:57.405 }, 00:20:57.405 { 00:20:57.405 "name": "BaseBdev4", 00:20:57.405 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:57.405 "is_configured": true, 00:20:57.405 "data_offset": 0, 00:20:57.405 "data_size": 65536 00:20:57.405 } 00:20:57.405 ] 00:20:57.405 }' 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.405 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.406 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.664 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.664 "name": "raid_bdev1", 00:20:57.664 "uuid": "49cc0852-aedf-4619-a34a-4e6fa3e5c75b", 00:20:57.664 "strip_size_kb": 0, 00:20:57.664 "state": "online", 00:20:57.664 "raid_level": "raid1", 00:20:57.664 "superblock": false, 00:20:57.664 "num_base_bdevs": 4, 00:20:57.664 "num_base_bdevs_discovered": 3, 00:20:57.664 "num_base_bdevs_operational": 3, 00:20:57.664 "base_bdevs_list": [ 00:20:57.664 { 00:20:57.664 "name": "spare", 00:20:57.664 "uuid": "f72d6cc1-1bfe-52a9-b137-ecc82a5755e1", 00:20:57.664 "is_configured": true, 00:20:57.664 "data_offset": 0, 00:20:57.664 "data_size": 65536 00:20:57.664 }, 00:20:57.664 { 00:20:57.664 "name": null, 00:20:57.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.664 "is_configured": false, 00:20:57.664 "data_offset": 0, 00:20:57.664 "data_size": 65536 00:20:57.664 }, 00:20:57.664 { 00:20:57.664 "name": "BaseBdev3", 00:20:57.664 "uuid": "21f1b964-40a8-522a-b21c-78f06d226220", 00:20:57.664 "is_configured": true, 00:20:57.664 "data_offset": 0, 00:20:57.664 "data_size": 65536 00:20:57.664 }, 00:20:57.664 { 00:20:57.664 "name": "BaseBdev4", 00:20:57.664 "uuid": "a0df9412-2086-53d4-8763-7703d721e107", 00:20:57.664 "is_configured": true, 00:20:57.664 "data_offset": 0, 00:20:57.664 "data_size": 65536 00:20:57.664 } 00:20:57.664 ] 00:20:57.664 }' 00:20:57.664 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.664 10:28:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:58.231 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:58.231 [2024-07-15 10:28:22.897603] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:58.231 [2024-07-15 10:28:22.897629] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:58.231 00:20:58.231 Latency(us) 00:20:58.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.231 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:58.231 raid_bdev1 : 10.68 101.28 303.84 0.00 0.00 14320.39 244.12 109051.90 00:20:58.231 =================================================================================================================== 00:20:58.231 Total : 101.28 303.84 0.00 0.00 14320.39 244.12 109051.90 00:20:58.231 [2024-07-15 10:28:22.944341] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.231 [2024-07-15 10:28:22.944360] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:58.231 [2024-07-15 10:28:22.944418] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:58.231 [2024-07-15 10:28:22.944426] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26b85b0 name raid_bdev1, state offline 00:20:58.231 
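After the raid bdev is deleted and the I/O statistics above are printed, the remainder of the test (traced below) byte-compares the rebuilt spare against the surviving members over NBD. A condensed sketch of that comparison, using the same socket and device paths as the trace, with the RPC shorthand again only illustrative:

    RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC nbd_start_disk spare /dev/nbd0
    $RPC nbd_start_disk BaseBdev3 /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1      # raid1 mirrors: rebuilt spare must match a surviving member
    $RPC nbd_stop_disk /dev/nbd1
    $RPC nbd_start_disk BaseBdev4 /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1
    $RPC nbd_stop_disk /dev/nbd1
    $RPC nbd_stop_disk /dev/nbd0

The trace additionally waits for each /dev/nbdX node to appear (waitfornbd) and skips the removed BaseBdev2 slot; those details are omitted from the sketch.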
0 00:20:58.231 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:20:58.231 10:28:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.490 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:58.748 /dev/nbd0 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:58.748 1+0 records in 00:20:58.748 1+0 records out 00:20:58.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241476 s, 17.0 MB/s 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:58.748 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:58.748 /dev/nbd1 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:59.006 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.006 1+0 records in 00:20:59.006 1+0 records out 00:20:59.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270009 s, 
15.2 MB/s 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.007 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:59.270 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:59.270 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:59.270 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:59.270 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.270 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.270 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:59.270 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:59.270 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:59.271 10:28:23 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:59.271 10:28:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:59.271 /dev/nbd1 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.271 1+0 records in 00:20:59.271 1+0 records out 00:20:59.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263428 s, 15.5 MB/s 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:59.271 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:59.587 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 1874271 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 1874271 ']' 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 1874271 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1874271 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- 
# '[' reactor_0 = sudo ']' 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1874271' 00:20:59.886 killing process with pid 1874271 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 1874271 00:20:59.886 Received shutdown signal, test time was about 12.270993 seconds 00:20:59.886 00:20:59.886 Latency(us) 00:20:59.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.886 =================================================================================================================== 00:20:59.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.886 [2024-07-15 10:28:24.533444] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.886 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 1874271 00:20:59.886 [2024-07-15 10:28:24.565852] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:21:00.146 00:21:00.146 real 0m16.450s 00:21:00.146 user 0m24.208s 00:21:00.146 sys 0m2.868s 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.146 ************************************ 00:21:00.146 END TEST raid_rebuild_test_io 00:21:00.146 ************************************ 00:21:00.146 10:28:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:00.146 10:28:24 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:21:00.146 10:28:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:21:00.146 10:28:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.146 10:28:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.146 ************************************ 00:21:00.146 START TEST raid_rebuild_test_sb_io 00:21:00.146 ************************************ 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
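Before going further into raid_rebuild_test_sb_io, a note on the verification loop that closed raid_rebuild_test_io above: after the rebuild finishes, the test exports each surviving base bdev through NBD and byte-compares it against the device already exported on /dev/nbd0. The following is only a sketch reconstructed from the rpc.py and cmp calls visible in the xtrace (the real implementation is the loop at test/bdev/bdev_raid.sh@725-733 plus the helpers in test/bdev/nbd_common.sh; the waitfornbd retries and error handling are omitted here):

  # Sketch -- socket path, RPC names and the cmp invocation are copied from the log above.
  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  for bdev in '' BaseBdev3 BaseBdev4; do               # the empty entry is the slot cleared for the removed base bdev
      [ -z "$bdev" ] && continue
      $rpc -s /var/tmp/spdk-raid.sock nbd_start_disk "$bdev" /dev/nbd1
      cmp -i 0 /dev/nbd0 /dev/nbd1                     # contents must match the device exported on /dev/nbd0
      $rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
  done
  $rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0   # finally tear down the /dev/nbd0 export as well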
00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev3 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # echo BaseBdev4 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=1877247 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 1877247 /var/tmp/spdk-raid.sock 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 1877247 ']' 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:00.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.146 10:28:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:00.146 [2024-07-15 10:28:24.866933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
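The raid_rebuild_test_sb_io fixture starting here uses the same bdevperf-based harness: raid_rebuild_test launches bdevperf with a background random read/write workload and then drives it over the /var/tmp/spdk-raid.sock RPC socket. A sketch of the launch sequence implied by the xtrace at bdev_raid.sh@595-597 (waitforlisten is an autotest_common.sh helper whose internals are not shown in this log; the 3M I/O size is also what produces the recurring "greater than zero copy threshold (65536)" notices, 3M being 3145728 bytes):

  # Sketch -- the bdevperf flags are copied verbatim from the log; -z keeps the workload
  # idle until the perform_tests RPC that the harness issues later via bdevperf.py.
  cd /var/jenkins/workspace/crypto-phy-autotest/spdk
  ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!                                          # 1877247 in this run
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # block until the socket accepts RPCs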
00:21:00.146 [2024-07-15 10:28:24.866976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877247 ] 00:21:00.146 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:00.146 Zero copy mechanism will not be used. 00:21:00.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.146 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:00.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.146 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:00.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.146 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:00.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.146 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:00.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.146 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:00.146 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.146 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:00.147 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:00.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:00.147 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:00.406 [2024-07-15 10:28:24.957116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.406 [2024-07-15 10:28:25.029823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.406 [2024-07-15 10:28:25.085530] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:00.406 [2024-07-15 10:28:25.085549] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:00.974 10:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.974 10:28:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:21:00.974 10:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:00.974 10:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:01.233 BaseBdev1_malloc 00:21:01.233 10:28:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:01.234 [2024-07-15 10:28:25.997272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:01.234 [2024-07-15 10:28:25.997306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.234 [2024-07-15 10:28:25.997322] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e1c5f0 00:21:01.234 [2024-07-15 10:28:25.997347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.234 [2024-07-15 10:28:25.998467] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.234 [2024-07-15 10:28:25.998489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:01.234 BaseBdev1 00:21:01.234 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:01.234 10:28:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:01.493 BaseBdev2_malloc 00:21:01.493 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:01.752 [2024-07-15 10:28:26.329889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:01.752 [2024-07-15 10:28:26.329933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.752 [2024-07-15 10:28:26.329947] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fc0130 00:21:01.752 [2024-07-15 10:28:26.329956] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.752 [2024-07-15 10:28:26.331009] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.752 [2024-07-15 10:28:26.331033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:01.752 BaseBdev2 00:21:01.752 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:01.752 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:01.752 BaseBdev3_malloc 00:21:01.752 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:02.011 [2024-07-15 10:28:26.642338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:02.011 [2024-07-15 10:28:26.642372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.011 [2024-07-15 10:28:26.642385] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb6420 00:21:02.011 [2024-07-15 10:28:26.642410] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.011 [2024-07-15 10:28:26.643412] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.011 [2024-07-15 10:28:26.643433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:02.011 BaseBdev3 00:21:02.011 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:02.011 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:02.270 BaseBdev4_malloc 00:21:02.270 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:02.270 [2024-07-15 10:28:26.982846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:02.270 [2024-07-15 10:28:26.982879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.270 [2024-07-15 10:28:26.982898] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb6d40 
00:21:02.270 [2024-07-15 10:28:26.982911] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.270 [2024-07-15 10:28:26.983910] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.270 [2024-07-15 10:28:26.983932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:02.270 BaseBdev4 00:21:02.270 10:28:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:02.529 spare_malloc 00:21:02.529 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:02.529 spare_delay 00:21:02.788 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:02.788 [2024-07-15 10:28:27.471625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:02.788 [2024-07-15 10:28:27.471657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.788 [2024-07-15 10:28:27.471672] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e15db0 00:21:02.788 [2024-07-15 10:28:27.471696] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.788 [2024-07-15 10:28:27.472732] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.788 [2024-07-15 10:28:27.472754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:02.788 spare 00:21:02.788 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:03.048 [2024-07-15 10:28:27.640086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.048 [2024-07-15 10:28:27.640998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:03.048 [2024-07-15 10:28:27.641034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:03.048 [2024-07-15 10:28:27.641063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:03.048 [2024-07-15 10:28:27.641187] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e185b0 00:21:03.048 [2024-07-15 10:28:27.641194] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:03.048 [2024-07-15 10:28:27.641320] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e18580 00:21:03.048 [2024-07-15 10:28:27.641416] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e185b0 00:21:03.048 [2024-07-15 10:28:27.641422] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e185b0 00:21:03.048 [2024-07-15 10:28:27.641482] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:03.048 10:28:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.048 "name": "raid_bdev1", 00:21:03.048 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:03.048 "strip_size_kb": 0, 00:21:03.048 "state": "online", 00:21:03.048 "raid_level": "raid1", 00:21:03.048 "superblock": true, 00:21:03.048 "num_base_bdevs": 4, 00:21:03.048 "num_base_bdevs_discovered": 4, 00:21:03.048 "num_base_bdevs_operational": 4, 00:21:03.048 "base_bdevs_list": [ 00:21:03.048 { 00:21:03.048 "name": "BaseBdev1", 00:21:03.048 "uuid": "5be532d8-2dbe-5994-9dfa-3485268791b1", 00:21:03.048 "is_configured": true, 00:21:03.048 "data_offset": 2048, 00:21:03.048 "data_size": 63488 00:21:03.048 }, 00:21:03.048 { 00:21:03.048 "name": "BaseBdev2", 00:21:03.048 "uuid": "97033386-ad3b-5391-b0f2-87d35c547224", 00:21:03.048 "is_configured": true, 00:21:03.048 "data_offset": 2048, 00:21:03.048 "data_size": 63488 00:21:03.048 }, 00:21:03.048 { 00:21:03.048 "name": "BaseBdev3", 00:21:03.048 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:03.048 "is_configured": true, 00:21:03.048 "data_offset": 2048, 00:21:03.048 "data_size": 63488 00:21:03.048 }, 00:21:03.048 { 00:21:03.048 "name": "BaseBdev4", 00:21:03.048 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:03.048 "is_configured": true, 00:21:03.048 "data_offset": 2048, 00:21:03.048 "data_size": 63488 00:21:03.048 } 00:21:03.048 ] 00:21:03.048 }' 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.048 10:28:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:03.616 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:03.616 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:03.902 [2024-07-15 10:28:28.478419] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.902 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:21:03.902 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.902 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:03.902 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:21:03.902 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:21:03.902 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:03.902 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:04.162 [2024-07-15 10:28:28.760813] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fb5490 00:21:04.162 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:04.162 Zero copy mechanism will not be used. 00:21:04.162 Running I/O for 60 seconds... 00:21:04.162 [2024-07-15 10:28:28.830389] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:04.162 [2024-07-15 10:28:28.830580] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1fb5490 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.162 10:28:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.421 10:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.421 "name": "raid_bdev1", 00:21:04.421 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:04.421 "strip_size_kb": 0, 00:21:04.421 "state": "online", 00:21:04.421 "raid_level": "raid1", 00:21:04.421 "superblock": true, 00:21:04.421 "num_base_bdevs": 4, 00:21:04.421 "num_base_bdevs_discovered": 3, 00:21:04.421 "num_base_bdevs_operational": 3, 00:21:04.421 "base_bdevs_list": [ 00:21:04.421 { 00:21:04.421 "name": null, 00:21:04.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.421 "is_configured": false, 00:21:04.421 "data_offset": 
2048, 00:21:04.421 "data_size": 63488 00:21:04.421 }, 00:21:04.421 { 00:21:04.421 "name": "BaseBdev2", 00:21:04.421 "uuid": "97033386-ad3b-5391-b0f2-87d35c547224", 00:21:04.421 "is_configured": true, 00:21:04.421 "data_offset": 2048, 00:21:04.421 "data_size": 63488 00:21:04.421 }, 00:21:04.421 { 00:21:04.421 "name": "BaseBdev3", 00:21:04.421 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:04.421 "is_configured": true, 00:21:04.421 "data_offset": 2048, 00:21:04.421 "data_size": 63488 00:21:04.421 }, 00:21:04.421 { 00:21:04.421 "name": "BaseBdev4", 00:21:04.421 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:04.421 "is_configured": true, 00:21:04.421 "data_offset": 2048, 00:21:04.421 "data_size": 63488 00:21:04.421 } 00:21:04.421 ] 00:21:04.421 }' 00:21:04.421 10:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.421 10:28:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:04.990 10:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:04.990 [2024-07-15 10:28:29.669810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.990 [2024-07-15 10:28:29.716230] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fb3a50 00:21:04.990 10:28:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:04.990 [2024-07-15 10:28:29.717883] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.249 [2024-07-15 10:28:29.826143] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:05.249 [2024-07-15 10:28:29.826454] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:05.249 [2024-07-15 10:28:29.947957] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:05.249 [2024-07-15 10:28:29.948473] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:06.187 [2024-07-15 10:28:30.610964] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:06.187 [2024-07-15 10:28:30.725041] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.187 10:28:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:06.187 "name": "raid_bdev1", 00:21:06.187 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:06.187 "strip_size_kb": 0, 00:21:06.187 "state": "online", 00:21:06.187 "raid_level": "raid1", 00:21:06.187 "superblock": true, 00:21:06.187 "num_base_bdevs": 4, 00:21:06.187 "num_base_bdevs_discovered": 4, 00:21:06.187 "num_base_bdevs_operational": 4, 00:21:06.187 "process": { 00:21:06.187 "type": "rebuild", 00:21:06.187 "target": "spare", 00:21:06.187 "progress": { 00:21:06.187 "blocks": 16384, 00:21:06.187 "percent": 25 00:21:06.187 } 00:21:06.187 }, 00:21:06.187 "base_bdevs_list": [ 00:21:06.187 { 00:21:06.187 "name": "spare", 00:21:06.187 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:06.187 "is_configured": true, 00:21:06.187 "data_offset": 2048, 00:21:06.187 "data_size": 63488 00:21:06.187 }, 00:21:06.187 { 00:21:06.187 "name": "BaseBdev2", 00:21:06.187 "uuid": "97033386-ad3b-5391-b0f2-87d35c547224", 00:21:06.187 "is_configured": true, 00:21:06.187 "data_offset": 2048, 00:21:06.187 "data_size": 63488 00:21:06.187 }, 00:21:06.187 { 00:21:06.187 "name": "BaseBdev3", 00:21:06.187 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:06.187 "is_configured": true, 00:21:06.187 "data_offset": 2048, 00:21:06.187 "data_size": 63488 00:21:06.187 }, 00:21:06.187 { 00:21:06.187 "name": "BaseBdev4", 00:21:06.187 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:06.187 "is_configured": true, 00:21:06.187 "data_offset": 2048, 00:21:06.187 "data_size": 63488 00:21:06.187 } 00:21:06.187 ] 00:21:06.187 }' 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.187 10:28:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:06.445 [2024-07-15 10:28:31.132997] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.445 [2024-07-15 10:28:31.152189] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:06.445 [2024-07-15 10:28:31.169052] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:06.445 [2024-07-15 10:28:31.170451] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.445 [2024-07-15 10:28:31.170471] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.445 [2024-07-15 10:28:31.170478] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:06.445 [2024-07-15 10:28:31.192037] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1fb5490 00:21:06.445 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:06.445 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:06.445 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- 
# local expected_state=online 00:21:06.445 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:06.445 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:06.446 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:06.446 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.446 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.446 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.446 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.446 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.446 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.704 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.704 "name": "raid_bdev1", 00:21:06.704 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:06.704 "strip_size_kb": 0, 00:21:06.704 "state": "online", 00:21:06.704 "raid_level": "raid1", 00:21:06.704 "superblock": true, 00:21:06.704 "num_base_bdevs": 4, 00:21:06.704 "num_base_bdevs_discovered": 3, 00:21:06.704 "num_base_bdevs_operational": 3, 00:21:06.704 "base_bdevs_list": [ 00:21:06.704 { 00:21:06.704 "name": null, 00:21:06.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.704 "is_configured": false, 00:21:06.704 "data_offset": 2048, 00:21:06.704 "data_size": 63488 00:21:06.704 }, 00:21:06.704 { 00:21:06.704 "name": "BaseBdev2", 00:21:06.704 "uuid": "97033386-ad3b-5391-b0f2-87d35c547224", 00:21:06.704 "is_configured": true, 00:21:06.704 "data_offset": 2048, 00:21:06.704 "data_size": 63488 00:21:06.704 }, 00:21:06.704 { 00:21:06.704 "name": "BaseBdev3", 00:21:06.704 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:06.704 "is_configured": true, 00:21:06.704 "data_offset": 2048, 00:21:06.704 "data_size": 63488 00:21:06.704 }, 00:21:06.704 { 00:21:06.704 "name": "BaseBdev4", 00:21:06.704 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:06.704 "is_configured": true, 00:21:06.704 "data_offset": 2048, 00:21:06.704 "data_size": 63488 00:21:06.704 } 00:21:06.704 ] 00:21:06.704 }' 00:21:06.704 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.704 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:07.271 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:07.271 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:07.271 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:07.271 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:07.271 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:07.271 10:28:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.271 10:28:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.528 10:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:07.528 "name": "raid_bdev1", 00:21:07.528 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:07.528 "strip_size_kb": 0, 00:21:07.528 "state": "online", 00:21:07.528 "raid_level": "raid1", 00:21:07.528 "superblock": true, 00:21:07.528 "num_base_bdevs": 4, 00:21:07.528 "num_base_bdevs_discovered": 3, 00:21:07.528 "num_base_bdevs_operational": 3, 00:21:07.528 "base_bdevs_list": [ 00:21:07.528 { 00:21:07.528 "name": null, 00:21:07.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.528 "is_configured": false, 00:21:07.528 "data_offset": 2048, 00:21:07.528 "data_size": 63488 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "name": "BaseBdev2", 00:21:07.528 "uuid": "97033386-ad3b-5391-b0f2-87d35c547224", 00:21:07.528 "is_configured": true, 00:21:07.528 "data_offset": 2048, 00:21:07.528 "data_size": 63488 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "name": "BaseBdev3", 00:21:07.528 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:07.528 "is_configured": true, 00:21:07.528 "data_offset": 2048, 00:21:07.528 "data_size": 63488 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "name": "BaseBdev4", 00:21:07.528 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:07.528 "is_configured": true, 00:21:07.528 "data_offset": 2048, 00:21:07.528 "data_size": 63488 00:21:07.528 } 00:21:07.528 ] 00:21:07.528 }' 00:21:07.528 10:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:07.528 10:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:07.528 10:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:07.528 10:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:07.528 10:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:07.786 [2024-07-15 10:28:32.351086] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:07.786 10:28:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:07.786 [2024-07-15 10:28:32.390004] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fb5730 00:21:07.787 [2024-07-15 10:28:32.391089] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:07.787 [2024-07-15 10:28:32.492547] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:07.787 [2024-07-15 10:28:32.492923] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:08.045 [2024-07-15 10:28:32.620116] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:08.045 [2024-07-15 10:28:32.620273] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:08.302 [2024-07-15 10:28:32.953387] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:08.559 [2024-07-15 10:28:33.162705] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:08.559 [2024-07-15 10:28:33.162832] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:08.816 [2024-07-15 10:28:33.380018] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:08.816 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:08.817 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:08.817 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:08.817 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:08.817 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:08.817 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.817 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.817 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:08.817 "name": "raid_bdev1", 00:21:08.817 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:08.817 "strip_size_kb": 0, 00:21:08.817 "state": "online", 00:21:08.817 "raid_level": "raid1", 00:21:08.817 "superblock": true, 00:21:08.817 "num_base_bdevs": 4, 00:21:08.817 "num_base_bdevs_discovered": 4, 00:21:08.817 "num_base_bdevs_operational": 4, 00:21:08.817 "process": { 00:21:08.817 "type": "rebuild", 00:21:08.817 "target": "spare", 00:21:08.817 "progress": { 00:21:08.817 "blocks": 14336, 00:21:08.817 "percent": 22 00:21:08.817 } 00:21:08.817 }, 00:21:08.817 "base_bdevs_list": [ 00:21:08.817 { 00:21:08.817 "name": "spare", 00:21:08.817 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:08.817 "is_configured": true, 00:21:08.817 "data_offset": 2048, 00:21:08.817 "data_size": 63488 00:21:08.817 }, 00:21:08.817 { 00:21:08.817 "name": "BaseBdev2", 00:21:08.817 "uuid": "97033386-ad3b-5391-b0f2-87d35c547224", 00:21:08.817 "is_configured": true, 00:21:08.817 "data_offset": 2048, 00:21:08.817 "data_size": 63488 00:21:08.817 }, 00:21:08.817 { 00:21:08.817 "name": "BaseBdev3", 00:21:08.817 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:08.817 "is_configured": true, 00:21:08.817 "data_offset": 2048, 00:21:08.817 "data_size": 63488 00:21:08.817 }, 00:21:08.817 { 00:21:08.817 "name": "BaseBdev4", 00:21:08.817 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:08.817 "is_configured": true, 00:21:08.817 "data_offset": 2048, 00:21:08.817 "data_size": 63488 00:21:08.817 } 00:21:08.817 ] 00:21:08.817 }' 00:21:08.817 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:08.817 [2024-07-15 10:28:33.589136] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:08.817 [2024-07-15 10:28:33.589287] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
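The progress JSON blocks in this part of the log all come from the same polling pattern: fetch the raid bdev state over RPC, select raid_bdev1 with jq, and confirm that a rebuild onto the spare is still in progress, retrying once per second until the 735-second deadline. A compact sketch assembled from the rpc.py, jq and sleep calls and the "local timeout=735" assignment visible in the surrounding xtrace (the real code, verify_raid_bdev_process plus the loop at bdev_raid.sh@705-710, also breaks out once the rebuild completes, which this sketch omits):

  # Sketch -- RPC name, jq filters, timeout and sleep interval are taken from the log.
  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  timeout=735
  while (( SECONDS < timeout )); do
      info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
             jq -r '.[] | select(.name == "raid_bdev1")')
      [[ $(jq -r '.process.type // "none"'   <<< "$info") == rebuild ]]   # still rebuilding
      [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]     # onto the spare bdev
      sleep 1
  done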
00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:21:09.076 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:21:09.076 10:28:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:09.076 [2024-07-15 10:28:33.798881] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:09.076 [2024-07-15 10:28:33.811607] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:09.076 [2024-07-15 10:28:33.812548] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:09.334 [2024-07-15 10:28:34.025002] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1fb5490 00:21:09.334 [2024-07-15 10:28:34.025022] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1fb5730 00:21:09.334 [2024-07-15 10:28:34.033048] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.334 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.592 [2024-07-15 10:28:34.158947] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:09.592 "name": "raid_bdev1", 00:21:09.592 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:09.592 "strip_size_kb": 0, 00:21:09.592 "state": "online", 00:21:09.592 
"raid_level": "raid1", 00:21:09.592 "superblock": true, 00:21:09.592 "num_base_bdevs": 4, 00:21:09.592 "num_base_bdevs_discovered": 3, 00:21:09.592 "num_base_bdevs_operational": 3, 00:21:09.592 "process": { 00:21:09.592 "type": "rebuild", 00:21:09.592 "target": "spare", 00:21:09.592 "progress": { 00:21:09.592 "blocks": 22528, 00:21:09.592 "percent": 35 00:21:09.592 } 00:21:09.592 }, 00:21:09.592 "base_bdevs_list": [ 00:21:09.592 { 00:21:09.592 "name": "spare", 00:21:09.592 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:09.592 "is_configured": true, 00:21:09.592 "data_offset": 2048, 00:21:09.592 "data_size": 63488 00:21:09.592 }, 00:21:09.592 { 00:21:09.592 "name": null, 00:21:09.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.592 "is_configured": false, 00:21:09.592 "data_offset": 2048, 00:21:09.592 "data_size": 63488 00:21:09.592 }, 00:21:09.592 { 00:21:09.592 "name": "BaseBdev3", 00:21:09.592 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:09.592 "is_configured": true, 00:21:09.592 "data_offset": 2048, 00:21:09.592 "data_size": 63488 00:21:09.592 }, 00:21:09.592 { 00:21:09.592 "name": "BaseBdev4", 00:21:09.592 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:09.592 "is_configured": true, 00:21:09.592 "data_offset": 2048, 00:21:09.592 "data_size": 63488 00:21:09.592 } 00:21:09.592 ] 00:21:09.592 }' 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=735 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:09.592 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.593 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.851 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:09.851 "name": "raid_bdev1", 00:21:09.851 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:09.851 "strip_size_kb": 0, 00:21:09.851 "state": "online", 00:21:09.851 "raid_level": "raid1", 00:21:09.851 "superblock": true, 00:21:09.852 "num_base_bdevs": 4, 00:21:09.852 "num_base_bdevs_discovered": 3, 00:21:09.852 "num_base_bdevs_operational": 3, 00:21:09.852 "process": { 00:21:09.852 "type": "rebuild", 00:21:09.852 "target": "spare", 00:21:09.852 "progress": { 00:21:09.852 "blocks": 24576, 
00:21:09.852 "percent": 38 00:21:09.852 } 00:21:09.852 }, 00:21:09.852 "base_bdevs_list": [ 00:21:09.852 { 00:21:09.852 "name": "spare", 00:21:09.852 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:09.852 "is_configured": true, 00:21:09.852 "data_offset": 2048, 00:21:09.852 "data_size": 63488 00:21:09.852 }, 00:21:09.852 { 00:21:09.852 "name": null, 00:21:09.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.852 "is_configured": false, 00:21:09.852 "data_offset": 2048, 00:21:09.852 "data_size": 63488 00:21:09.852 }, 00:21:09.852 { 00:21:09.852 "name": "BaseBdev3", 00:21:09.852 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:09.852 "is_configured": true, 00:21:09.852 "data_offset": 2048, 00:21:09.852 "data_size": 63488 00:21:09.852 }, 00:21:09.852 { 00:21:09.852 "name": "BaseBdev4", 00:21:09.852 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:09.852 "is_configured": true, 00:21:09.852 "data_offset": 2048, 00:21:09.852 "data_size": 63488 00:21:09.852 } 00:21:09.852 ] 00:21:09.852 }' 00:21:09.852 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:09.852 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.852 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:09.852 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.852 10:28:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:10.110 [2024-07-15 10:28:34.848625] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:10.679 [2024-07-15 10:28:35.266192] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:10.679 [2024-07-15 10:28:35.379241] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.938 [2024-07-15 10:28:35.586980] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:10.938 "name": "raid_bdev1", 00:21:10.938 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:10.938 "strip_size_kb": 0, 00:21:10.938 "state": "online", 00:21:10.938 "raid_level": 
"raid1", 00:21:10.938 "superblock": true, 00:21:10.938 "num_base_bdevs": 4, 00:21:10.938 "num_base_bdevs_discovered": 3, 00:21:10.938 "num_base_bdevs_operational": 3, 00:21:10.938 "process": { 00:21:10.938 "type": "rebuild", 00:21:10.938 "target": "spare", 00:21:10.938 "progress": { 00:21:10.938 "blocks": 45056, 00:21:10.938 "percent": 70 00:21:10.938 } 00:21:10.938 }, 00:21:10.938 "base_bdevs_list": [ 00:21:10.938 { 00:21:10.938 "name": "spare", 00:21:10.938 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:10.938 "is_configured": true, 00:21:10.938 "data_offset": 2048, 00:21:10.938 "data_size": 63488 00:21:10.938 }, 00:21:10.938 { 00:21:10.938 "name": null, 00:21:10.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.938 "is_configured": false, 00:21:10.938 "data_offset": 2048, 00:21:10.938 "data_size": 63488 00:21:10.938 }, 00:21:10.938 { 00:21:10.938 "name": "BaseBdev3", 00:21:10.938 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:10.938 "is_configured": true, 00:21:10.938 "data_offset": 2048, 00:21:10.938 "data_size": 63488 00:21:10.938 }, 00:21:10.938 { 00:21:10.938 "name": "BaseBdev4", 00:21:10.938 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:10.938 "is_configured": true, 00:21:10.938 "data_offset": 2048, 00:21:10.938 "data_size": 63488 00:21:10.938 } 00:21:10.938 ] 00:21:10.938 }' 00:21:10.938 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:11.196 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.196 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:11.196 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.196 10:28:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:11.196 [2024-07-15 10:28:35.803437] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:11.453 [2024-07-15 10:28:36.143327] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:11.711 [2024-07-15 10:28:36.353337] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:11.711 [2024-07-15 10:28:36.354045] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:11.969 [2024-07-15 10:28:36.566576] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.228 [2024-07-15 10:28:36.883414] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:12.228 "name": "raid_bdev1", 00:21:12.228 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:12.228 "strip_size_kb": 0, 00:21:12.228 "state": "online", 00:21:12.228 "raid_level": "raid1", 00:21:12.228 "superblock": true, 00:21:12.228 "num_base_bdevs": 4, 00:21:12.228 "num_base_bdevs_discovered": 3, 00:21:12.228 "num_base_bdevs_operational": 3, 00:21:12.228 "process": { 00:21:12.228 "type": "rebuild", 00:21:12.228 "target": "spare", 00:21:12.228 "progress": { 00:21:12.228 "blocks": 63488, 00:21:12.228 "percent": 100 00:21:12.228 } 00:21:12.228 }, 00:21:12.228 "base_bdevs_list": [ 00:21:12.228 { 00:21:12.228 "name": "spare", 00:21:12.228 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:12.228 "is_configured": true, 00:21:12.228 "data_offset": 2048, 00:21:12.228 "data_size": 63488 00:21:12.228 }, 00:21:12.228 { 00:21:12.228 "name": null, 00:21:12.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.228 "is_configured": false, 00:21:12.228 "data_offset": 2048, 00:21:12.228 "data_size": 63488 00:21:12.228 }, 00:21:12.228 { 00:21:12.228 "name": "BaseBdev3", 00:21:12.228 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:12.228 "is_configured": true, 00:21:12.228 "data_offset": 2048, 00:21:12.228 "data_size": 63488 00:21:12.228 }, 00:21:12.228 { 00:21:12.228 "name": "BaseBdev4", 00:21:12.228 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:12.228 "is_configured": true, 00:21:12.228 "data_offset": 2048, 00:21:12.228 "data_size": 63488 00:21:12.228 } 00:21:12.228 ] 00:21:12.228 }' 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:12.228 [2024-07-15 10:28:36.988932] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:12.228 [2024-07-15 10:28:36.992041] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.228 10:28:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:12.486 10:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.486 10:28:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:13.489 10:28:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.489 "name": "raid_bdev1", 00:21:13.489 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:13.489 "strip_size_kb": 0, 00:21:13.489 "state": "online", 00:21:13.489 "raid_level": "raid1", 00:21:13.489 "superblock": true, 00:21:13.489 "num_base_bdevs": 4, 00:21:13.489 "num_base_bdevs_discovered": 3, 00:21:13.489 "num_base_bdevs_operational": 3, 00:21:13.489 "base_bdevs_list": [ 00:21:13.489 { 00:21:13.489 "name": "spare", 00:21:13.489 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:13.489 "is_configured": true, 00:21:13.489 "data_offset": 2048, 00:21:13.489 "data_size": 63488 00:21:13.489 }, 00:21:13.489 { 00:21:13.489 "name": null, 00:21:13.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.489 "is_configured": false, 00:21:13.489 "data_offset": 2048, 00:21:13.489 "data_size": 63488 00:21:13.489 }, 00:21:13.489 { 00:21:13.489 "name": "BaseBdev3", 00:21:13.489 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:13.489 "is_configured": true, 00:21:13.489 "data_offset": 2048, 00:21:13.489 "data_size": 63488 00:21:13.489 }, 00:21:13.489 { 00:21:13.489 "name": "BaseBdev4", 00:21:13.489 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:13.489 "is_configured": true, 00:21:13.489 "data_offset": 2048, 00:21:13.489 "data_size": 63488 00:21:13.489 } 00:21:13.489 ] 00:21:13.489 }' 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:13.489 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.749 "name": "raid_bdev1", 00:21:13.749 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:13.749 "strip_size_kb": 0, 00:21:13.749 "state": "online", 00:21:13.749 "raid_level": "raid1", 00:21:13.749 "superblock": true, 00:21:13.749 "num_base_bdevs": 
4, 00:21:13.749 "num_base_bdevs_discovered": 3, 00:21:13.749 "num_base_bdevs_operational": 3, 00:21:13.749 "base_bdevs_list": [ 00:21:13.749 { 00:21:13.749 "name": "spare", 00:21:13.749 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:13.749 "is_configured": true, 00:21:13.749 "data_offset": 2048, 00:21:13.749 "data_size": 63488 00:21:13.749 }, 00:21:13.749 { 00:21:13.749 "name": null, 00:21:13.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.749 "is_configured": false, 00:21:13.749 "data_offset": 2048, 00:21:13.749 "data_size": 63488 00:21:13.749 }, 00:21:13.749 { 00:21:13.749 "name": "BaseBdev3", 00:21:13.749 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:13.749 "is_configured": true, 00:21:13.749 "data_offset": 2048, 00:21:13.749 "data_size": 63488 00:21:13.749 }, 00:21:13.749 { 00:21:13.749 "name": "BaseBdev4", 00:21:13.749 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:13.749 "is_configured": true, 00:21:13.749 "data_offset": 2048, 00:21:13.749 "data_size": 63488 00:21:13.749 } 00:21:13.749 ] 00:21:13.749 }' 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:13.749 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.008 "name": "raid_bdev1", 00:21:14.008 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:14.008 "strip_size_kb": 0, 00:21:14.008 "state": "online", 00:21:14.008 "raid_level": "raid1", 00:21:14.008 "superblock": true, 00:21:14.008 "num_base_bdevs": 4, 00:21:14.008 "num_base_bdevs_discovered": 3, 00:21:14.008 "num_base_bdevs_operational": 3, 00:21:14.008 "base_bdevs_list": [ 00:21:14.008 { 00:21:14.008 "name": "spare", 00:21:14.008 "uuid": 
"0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:14.008 "is_configured": true, 00:21:14.008 "data_offset": 2048, 00:21:14.008 "data_size": 63488 00:21:14.008 }, 00:21:14.008 { 00:21:14.008 "name": null, 00:21:14.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.008 "is_configured": false, 00:21:14.008 "data_offset": 2048, 00:21:14.008 "data_size": 63488 00:21:14.008 }, 00:21:14.008 { 00:21:14.008 "name": "BaseBdev3", 00:21:14.008 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:14.008 "is_configured": true, 00:21:14.008 "data_offset": 2048, 00:21:14.008 "data_size": 63488 00:21:14.008 }, 00:21:14.008 { 00:21:14.008 "name": "BaseBdev4", 00:21:14.008 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:14.008 "is_configured": true, 00:21:14.008 "data_offset": 2048, 00:21:14.008 "data_size": 63488 00:21:14.008 } 00:21:14.008 ] 00:21:14.008 }' 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.008 10:28:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:14.575 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:14.575 [2024-07-15 10:28:39.357804] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.575 [2024-07-15 10:28:39.357829] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.834 00:21:14.834 Latency(us) 00:21:14.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.834 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:14.834 raid_bdev1 : 10.67 109.28 327.85 0.00 0.00 12659.24 237.57 109051.90 00:21:14.834 =================================================================================================================== 00:21:14.834 Total : 109.28 327.85 0.00 0.00 12659.24 237.57 109051.90 00:21:14.834 [2024-07-15 10:28:39.460866] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.834 [2024-07-15 10:28:39.460887] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.834 [2024-07-15 10:28:39.460955] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.834 [2024-07-15 10:28:39.460964] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e185b0 name raid_bdev1, state offline 00:21:14.834 0 00:21:14.834 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.834 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('spare') 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:15.093 /dev/nbd0 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.093 1+0 records in 00:21:15.093 1+0 records out 00:21:15.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247317 s, 16.6 MB/s 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:15.093 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 
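A minimal sketch of the nbd-based data check the trace above is entering, assuming the rpc.py path, RPC socket and bdev names shown in this run: the rebuilt 'spare' bdev and each surviving base bdev are exported over NBD, and everything past the 1 MiB superblock region is compared with cmp. nbd_start_disk/nbd_stop_disk are the standard SPDK RPCs used by the test.

    # sketch only - mirrors the cmp pattern in the trace, not the full bdev_raid.sh helper
    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    $rpc -s $sock nbd_start_disk spare /dev/nbd0          # export the rebuilt target
    for bdev in BaseBdev3 BaseBdev4; do                    # removed slot is skipped via 'continue' above
        $rpc -s $sock nbd_start_disk "$bdev" /dev/nbd1
        cmp -i 1048576 /dev/nbd0 /dev/nbd1                 # skip the 1 MiB superblock; mirrored data must match
        $rpc -s $sock nbd_stop_disk /dev/nbd1
    done
    $rpc -s $sock nbd_stop_disk /dev/nbd0

The 1048576-byte offset passed to cmp matches the data_offset of 2048 blocks (512 B each) reported for every base bdev, so only the mirrored data area is compared.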
00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.094 10:28:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:15.352 /dev/nbd1 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.352 1+0 records in 00:21:15.352 1+0 records out 00:21:15.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259944 s, 15.8 MB/s 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:15.352 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.353 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.611 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:15.869 /dev/nbd1 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:15.869 10:28:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.869 1+0 records in 00:21:15.869 1+0 records out 00:21:15.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156667 s, 26.1 MB/s 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.869 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.127 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:21:16.385 10:28:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:16.385 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:16.643 [2024-07-15 10:28:41.315796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:16.643 [2024-07-15 10:28:41.315829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.643 [2024-07-15 10:28:41.315844] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eb0ed0 00:21:16.643 [2024-07-15 10:28:41.315868] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.643 [2024-07-15 10:28:41.317012] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.643 [2024-07-15 10:28:41.317034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:16.643 [2024-07-15 10:28:41.317086] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:16.643 [2024-07-15 10:28:41.317106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:16.643 [2024-07-15 10:28:41.317176] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:16.643 [2024-07-15 10:28:41.317222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:16.643 spare 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.643 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.643 [2024-07-15 10:28:41.417512] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eb35f0 00:21:16.643 [2024-07-15 10:28:41.417523] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:16.643 [2024-07-15 10:28:41.417646] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e139a0 00:21:16.643 [2024-07-15 10:28:41.417739] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eb35f0 00:21:16.643 [2024-07-15 10:28:41.417746] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1eb35f0 00:21:16.643 [2024-07-15 10:28:41.417813] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.901 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.901 "name": "raid_bdev1", 00:21:16.901 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:16.901 "strip_size_kb": 0, 00:21:16.901 "state": "online", 00:21:16.901 "raid_level": "raid1", 00:21:16.901 "superblock": true, 00:21:16.901 "num_base_bdevs": 4, 00:21:16.901 "num_base_bdevs_discovered": 3, 00:21:16.901 "num_base_bdevs_operational": 3, 00:21:16.901 "base_bdevs_list": [ 00:21:16.901 { 00:21:16.901 "name": "spare", 00:21:16.901 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:16.901 "is_configured": true, 00:21:16.901 "data_offset": 2048, 00:21:16.901 "data_size": 63488 00:21:16.901 }, 00:21:16.901 { 00:21:16.901 "name": null, 00:21:16.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.901 "is_configured": false, 00:21:16.901 "data_offset": 2048, 00:21:16.901 "data_size": 63488 00:21:16.901 }, 00:21:16.901 { 00:21:16.901 "name": "BaseBdev3", 00:21:16.901 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:16.901 "is_configured": true, 00:21:16.901 
"data_offset": 2048, 00:21:16.901 "data_size": 63488 00:21:16.901 }, 00:21:16.901 { 00:21:16.901 "name": "BaseBdev4", 00:21:16.901 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:16.901 "is_configured": true, 00:21:16.901 "data_offset": 2048, 00:21:16.901 "data_size": 63488 00:21:16.901 } 00:21:16.901 ] 00:21:16.901 }' 00:21:16.901 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.901 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:17.467 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:17.467 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:17.467 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:17.467 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:17.467 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:17.467 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.467 10:28:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.467 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:17.467 "name": "raid_bdev1", 00:21:17.468 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:17.468 "strip_size_kb": 0, 00:21:17.468 "state": "online", 00:21:17.468 "raid_level": "raid1", 00:21:17.468 "superblock": true, 00:21:17.468 "num_base_bdevs": 4, 00:21:17.468 "num_base_bdevs_discovered": 3, 00:21:17.468 "num_base_bdevs_operational": 3, 00:21:17.468 "base_bdevs_list": [ 00:21:17.468 { 00:21:17.468 "name": "spare", 00:21:17.468 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:17.468 "is_configured": true, 00:21:17.468 "data_offset": 2048, 00:21:17.468 "data_size": 63488 00:21:17.468 }, 00:21:17.468 { 00:21:17.468 "name": null, 00:21:17.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.468 "is_configured": false, 00:21:17.468 "data_offset": 2048, 00:21:17.468 "data_size": 63488 00:21:17.468 }, 00:21:17.468 { 00:21:17.468 "name": "BaseBdev3", 00:21:17.468 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:17.468 "is_configured": true, 00:21:17.468 "data_offset": 2048, 00:21:17.468 "data_size": 63488 00:21:17.468 }, 00:21:17.468 { 00:21:17.468 "name": "BaseBdev4", 00:21:17.468 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:17.468 "is_configured": true, 00:21:17.468 "data_offset": 2048, 00:21:17.468 "data_size": 63488 00:21:17.468 } 00:21:17.468 ] 00:21:17.468 }' 00:21:17.468 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:17.468 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:17.468 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:17.468 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:17.726 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.726 10:28:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:17.726 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.726 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:17.985 [2024-07-15 10:28:42.567165] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.985 "name": "raid_bdev1", 00:21:17.985 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:17.985 "strip_size_kb": 0, 00:21:17.985 "state": "online", 00:21:17.985 "raid_level": "raid1", 00:21:17.985 "superblock": true, 00:21:17.985 "num_base_bdevs": 4, 00:21:17.985 "num_base_bdevs_discovered": 2, 00:21:17.985 "num_base_bdevs_operational": 2, 00:21:17.985 "base_bdevs_list": [ 00:21:17.985 { 00:21:17.985 "name": null, 00:21:17.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.985 "is_configured": false, 00:21:17.985 "data_offset": 2048, 00:21:17.985 "data_size": 63488 00:21:17.985 }, 00:21:17.985 { 00:21:17.985 "name": null, 00:21:17.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.985 "is_configured": false, 00:21:17.985 "data_offset": 2048, 00:21:17.985 "data_size": 63488 00:21:17.985 }, 00:21:17.985 { 00:21:17.985 "name": "BaseBdev3", 00:21:17.985 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:17.985 "is_configured": true, 00:21:17.985 "data_offset": 2048, 00:21:17.985 "data_size": 63488 00:21:17.985 }, 00:21:17.985 { 00:21:17.985 "name": "BaseBdev4", 00:21:17.985 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:17.985 "is_configured": true, 00:21:17.985 "data_offset": 2048, 00:21:17.985 "data_size": 63488 00:21:17.985 } 00:21:17.985 ] 00:21:17.985 }' 00:21:17.985 10:28:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.985 10:28:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:18.553 10:28:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:18.812 [2024-07-15 10:28:43.405386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.812 [2024-07-15 10:28:43.405487] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:18.812 [2024-07-15 10:28:43.405502] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:18.812 [2024-07-15 10:28:43.405522] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.812 [2024-07-15 10:28:43.409396] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fb3820 00:21:18.812 [2024-07-15 10:28:43.411089] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:18.812 10:28:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:21:19.745 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.745 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:19.745 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:19.745 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:19.745 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:19.745 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.745 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.003 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:20.003 "name": "raid_bdev1", 00:21:20.003 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:20.003 "strip_size_kb": 0, 00:21:20.003 "state": "online", 00:21:20.003 "raid_level": "raid1", 00:21:20.003 "superblock": true, 00:21:20.003 "num_base_bdevs": 4, 00:21:20.003 "num_base_bdevs_discovered": 3, 00:21:20.003 "num_base_bdevs_operational": 3, 00:21:20.003 "process": { 00:21:20.003 "type": "rebuild", 00:21:20.003 "target": "spare", 00:21:20.003 "progress": { 00:21:20.003 "blocks": 22528, 00:21:20.003 "percent": 35 00:21:20.003 } 00:21:20.003 }, 00:21:20.003 "base_bdevs_list": [ 00:21:20.003 { 00:21:20.003 "name": "spare", 00:21:20.003 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:20.003 "is_configured": true, 00:21:20.003 "data_offset": 2048, 00:21:20.003 "data_size": 63488 00:21:20.003 }, 00:21:20.003 { 00:21:20.003 "name": null, 00:21:20.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.003 "is_configured": false, 00:21:20.003 "data_offset": 2048, 00:21:20.003 "data_size": 63488 00:21:20.003 }, 00:21:20.003 { 00:21:20.003 "name": "BaseBdev3", 00:21:20.003 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:20.003 "is_configured": true, 00:21:20.003 "data_offset": 2048, 00:21:20.003 "data_size": 63488 00:21:20.003 }, 00:21:20.003 { 00:21:20.003 "name": "BaseBdev4", 00:21:20.003 
"uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:20.003 "is_configured": true, 00:21:20.003 "data_offset": 2048, 00:21:20.003 "data_size": 63488 00:21:20.003 } 00:21:20.003 ] 00:21:20.003 }' 00:21:20.003 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:20.003 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.003 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:20.003 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.003 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:20.288 [2024-07-15 10:28:44.850532] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:20.288 [2024-07-15 10:28:44.921420] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:20.288 [2024-07-15 10:28:44.921454] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.288 [2024-07-15 10:28:44.921464] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:20.288 [2024-07-15 10:28:44.921469] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.288 10:28:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.546 10:28:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.546 "name": "raid_bdev1", 00:21:20.546 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:20.546 "strip_size_kb": 0, 00:21:20.546 "state": "online", 00:21:20.546 "raid_level": "raid1", 00:21:20.546 "superblock": true, 00:21:20.546 "num_base_bdevs": 4, 00:21:20.546 "num_base_bdevs_discovered": 2, 00:21:20.546 "num_base_bdevs_operational": 2, 00:21:20.546 "base_bdevs_list": [ 00:21:20.546 { 00:21:20.546 "name": null, 00:21:20.546 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:20.546 "is_configured": false, 00:21:20.546 "data_offset": 2048, 00:21:20.546 "data_size": 63488 00:21:20.546 }, 00:21:20.546 { 00:21:20.546 "name": null, 00:21:20.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.546 "is_configured": false, 00:21:20.546 "data_offset": 2048, 00:21:20.546 "data_size": 63488 00:21:20.546 }, 00:21:20.546 { 00:21:20.546 "name": "BaseBdev3", 00:21:20.546 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:20.546 "is_configured": true, 00:21:20.546 "data_offset": 2048, 00:21:20.546 "data_size": 63488 00:21:20.546 }, 00:21:20.546 { 00:21:20.546 "name": "BaseBdev4", 00:21:20.546 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:20.546 "is_configured": true, 00:21:20.547 "data_offset": 2048, 00:21:20.547 "data_size": 63488 00:21:20.547 } 00:21:20.547 ] 00:21:20.547 }' 00:21:20.547 10:28:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.547 10:28:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:21.115 10:28:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:21.115 [2024-07-15 10:28:45.755365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:21.115 [2024-07-15 10:28:45.755401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.115 [2024-07-15 10:28:45.755432] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb3610 00:21:21.115 [2024-07-15 10:28:45.755441] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.115 [2024-07-15 10:28:45.755707] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.115 [2024-07-15 10:28:45.755732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:21.115 [2024-07-15 10:28:45.755787] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:21.115 [2024-07-15 10:28:45.755795] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:21:21.115 [2024-07-15 10:28:45.755802] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:21.115 [2024-07-15 10:28:45.755814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:21.115 [2024-07-15 10:28:45.759698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1eb3570 00:21:21.115 spare 00:21:21.115 [2024-07-15 10:28:45.760692] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:21.115 10:28:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:21:22.050 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.050 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:22.050 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:22.050 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:22.050 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:22.050 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.050 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.308 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:22.308 "name": "raid_bdev1", 00:21:22.308 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:22.308 "strip_size_kb": 0, 00:21:22.308 "state": "online", 00:21:22.308 "raid_level": "raid1", 00:21:22.308 "superblock": true, 00:21:22.308 "num_base_bdevs": 4, 00:21:22.308 "num_base_bdevs_discovered": 3, 00:21:22.308 "num_base_bdevs_operational": 3, 00:21:22.308 "process": { 00:21:22.308 "type": "rebuild", 00:21:22.308 "target": "spare", 00:21:22.308 "progress": { 00:21:22.308 "blocks": 22528, 00:21:22.308 "percent": 35 00:21:22.308 } 00:21:22.308 }, 00:21:22.308 "base_bdevs_list": [ 00:21:22.308 { 00:21:22.308 "name": "spare", 00:21:22.308 "uuid": "0eae68ff-92aa-59c0-9d63-93dff1518656", 00:21:22.308 "is_configured": true, 00:21:22.308 "data_offset": 2048, 00:21:22.308 "data_size": 63488 00:21:22.308 }, 00:21:22.308 { 00:21:22.308 "name": null, 00:21:22.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.308 "is_configured": false, 00:21:22.308 "data_offset": 2048, 00:21:22.308 "data_size": 63488 00:21:22.308 }, 00:21:22.308 { 00:21:22.308 "name": "BaseBdev3", 00:21:22.308 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:22.308 "is_configured": true, 00:21:22.308 "data_offset": 2048, 00:21:22.308 "data_size": 63488 00:21:22.308 }, 00:21:22.308 { 00:21:22.308 "name": "BaseBdev4", 00:21:22.308 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:22.308 "is_configured": true, 00:21:22.308 "data_offset": 2048, 00:21:22.308 "data_size": 63488 00:21:22.308 } 00:21:22.308 ] 00:21:22.308 }' 00:21:22.308 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:22.308 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.308 10:28:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:22.308 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.308 10:28:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:22.566 [2024-07-15 10:28:47.172044] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:22.566 [2024-07-15 10:28:47.271004] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:22.566 [2024-07-15 10:28:47.271041] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.566 [2024-07-15 10:28:47.271051] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:22.566 [2024-07-15 10:28:47.271056] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.566 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.825 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:22.825 "name": "raid_bdev1", 00:21:22.825 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:22.825 "strip_size_kb": 0, 00:21:22.825 "state": "online", 00:21:22.825 "raid_level": "raid1", 00:21:22.825 "superblock": true, 00:21:22.825 "num_base_bdevs": 4, 00:21:22.825 "num_base_bdevs_discovered": 2, 00:21:22.825 "num_base_bdevs_operational": 2, 00:21:22.825 "base_bdevs_list": [ 00:21:22.825 { 00:21:22.825 "name": null, 00:21:22.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.825 "is_configured": false, 00:21:22.825 "data_offset": 2048, 00:21:22.825 "data_size": 63488 00:21:22.825 }, 00:21:22.825 { 00:21:22.825 "name": null, 00:21:22.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.825 "is_configured": false, 00:21:22.825 "data_offset": 2048, 00:21:22.825 "data_size": 63488 00:21:22.825 }, 00:21:22.825 { 00:21:22.825 "name": "BaseBdev3", 00:21:22.825 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:22.825 "is_configured": true, 00:21:22.825 "data_offset": 2048, 00:21:22.825 "data_size": 63488 00:21:22.825 }, 00:21:22.825 { 00:21:22.825 "name": "BaseBdev4", 00:21:22.825 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 
00:21:22.825 "is_configured": true, 00:21:22.825 "data_offset": 2048, 00:21:22.825 "data_size": 63488 00:21:22.825 } 00:21:22.825 ] 00:21:22.825 }' 00:21:22.825 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:22.825 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:23.392 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.392 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:23.392 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:23.392 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:23.392 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:23.392 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.392 10:28:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.392 10:28:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:23.392 "name": "raid_bdev1", 00:21:23.392 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:23.392 "strip_size_kb": 0, 00:21:23.392 "state": "online", 00:21:23.392 "raid_level": "raid1", 00:21:23.392 "superblock": true, 00:21:23.392 "num_base_bdevs": 4, 00:21:23.392 "num_base_bdevs_discovered": 2, 00:21:23.392 "num_base_bdevs_operational": 2, 00:21:23.392 "base_bdevs_list": [ 00:21:23.392 { 00:21:23.392 "name": null, 00:21:23.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.392 "is_configured": false, 00:21:23.392 "data_offset": 2048, 00:21:23.392 "data_size": 63488 00:21:23.392 }, 00:21:23.392 { 00:21:23.392 "name": null, 00:21:23.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.392 "is_configured": false, 00:21:23.392 "data_offset": 2048, 00:21:23.392 "data_size": 63488 00:21:23.392 }, 00:21:23.392 { 00:21:23.392 "name": "BaseBdev3", 00:21:23.392 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:23.392 "is_configured": true, 00:21:23.392 "data_offset": 2048, 00:21:23.392 "data_size": 63488 00:21:23.392 }, 00:21:23.392 { 00:21:23.392 "name": "BaseBdev4", 00:21:23.392 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:23.392 "is_configured": true, 00:21:23.392 "data_offset": 2048, 00:21:23.392 "data_size": 63488 00:21:23.392 } 00:21:23.392 ] 00:21:23.392 }' 00:21:23.392 10:28:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:23.651 10:28:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:23.651 10:28:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:23.651 10:28:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:23.651 10:28:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:23.651 10:28:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc 
-p BaseBdev1 00:21:23.910 [2024-07-15 10:28:48.558325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:23.910 [2024-07-15 10:28:48.558360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.910 [2024-07-15 10:28:48.558375] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e16140 00:21:23.910 [2024-07-15 10:28:48.558383] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.910 [2024-07-15 10:28:48.558623] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.910 [2024-07-15 10:28:48.558636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:23.910 [2024-07-15 10:28:48.558680] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:23.910 [2024-07-15 10:28:48.558688] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:23.910 [2024-07-15 10:28:48.558695] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:23.910 BaseBdev1 00:21:23.910 10:28:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.845 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.104 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.104 "name": "raid_bdev1", 00:21:25.104 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:25.104 "strip_size_kb": 0, 00:21:25.104 "state": "online", 00:21:25.104 "raid_level": "raid1", 00:21:25.104 "superblock": true, 00:21:25.104 "num_base_bdevs": 4, 00:21:25.104 "num_base_bdevs_discovered": 2, 00:21:25.104 "num_base_bdevs_operational": 2, 00:21:25.104 "base_bdevs_list": [ 00:21:25.104 { 00:21:25.104 "name": null, 00:21:25.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.104 "is_configured": false, 00:21:25.104 "data_offset": 2048, 00:21:25.104 "data_size": 63488 00:21:25.104 }, 00:21:25.104 { 00:21:25.104 "name": null, 00:21:25.104 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.104 "is_configured": false, 00:21:25.104 "data_offset": 2048, 00:21:25.104 "data_size": 63488 00:21:25.104 }, 00:21:25.104 { 00:21:25.104 "name": "BaseBdev3", 00:21:25.104 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:25.105 "is_configured": true, 00:21:25.105 "data_offset": 2048, 00:21:25.105 "data_size": 63488 00:21:25.105 }, 00:21:25.105 { 00:21:25.105 "name": "BaseBdev4", 00:21:25.105 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:25.105 "is_configured": true, 00:21:25.105 "data_offset": 2048, 00:21:25.105 "data_size": 63488 00:21:25.105 } 00:21:25.105 ] 00:21:25.105 }' 00:21:25.105 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.105 10:28:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:25.672 "name": "raid_bdev1", 00:21:25.672 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:25.672 "strip_size_kb": 0, 00:21:25.672 "state": "online", 00:21:25.672 "raid_level": "raid1", 00:21:25.672 "superblock": true, 00:21:25.672 "num_base_bdevs": 4, 00:21:25.672 "num_base_bdevs_discovered": 2, 00:21:25.672 "num_base_bdevs_operational": 2, 00:21:25.672 "base_bdevs_list": [ 00:21:25.672 { 00:21:25.672 "name": null, 00:21:25.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.672 "is_configured": false, 00:21:25.672 "data_offset": 2048, 00:21:25.672 "data_size": 63488 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "name": null, 00:21:25.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.672 "is_configured": false, 00:21:25.672 "data_offset": 2048, 00:21:25.672 "data_size": 63488 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "name": "BaseBdev3", 00:21:25.672 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:25.672 "is_configured": true, 00:21:25.672 "data_offset": 2048, 00:21:25.672 "data_size": 63488 00:21:25.672 }, 00:21:25.672 { 00:21:25.672 "name": "BaseBdev4", 00:21:25.672 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:25.672 "is_configured": true, 00:21:25.672 "data_offset": 2048, 00:21:25.672 "data_size": 63488 00:21:25.672 } 00:21:25.672 ] 00:21:25.672 }' 00:21:25.672 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:25.931 10:28:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.931 [2024-07-15 10:28:50.675976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.931 [2024-07-15 10:28:50.676067] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:21:25.931 [2024-07-15 10:28:50.676081] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:25.931 request: 00:21:25.931 { 00:21:25.931 "base_bdev": "BaseBdev1", 00:21:25.931 "raid_bdev": "raid_bdev1", 00:21:25.931 "method": "bdev_raid_add_base_bdev", 00:21:25.931 "req_id": 1 00:21:25.931 } 00:21:25.931 Got JSON-RPC error response 00:21:25.931 response: 00:21:25.931 { 00:21:25.931 "code": -22, 00:21:25.931 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:25.931 } 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:25.931 10:28:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.937 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.195 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.195 "name": "raid_bdev1", 00:21:27.195 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:27.195 "strip_size_kb": 0, 00:21:27.195 "state": "online", 00:21:27.195 "raid_level": "raid1", 00:21:27.195 "superblock": true, 00:21:27.195 "num_base_bdevs": 4, 00:21:27.195 "num_base_bdevs_discovered": 2, 00:21:27.195 "num_base_bdevs_operational": 2, 00:21:27.195 "base_bdevs_list": [ 00:21:27.195 { 00:21:27.195 "name": null, 00:21:27.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.195 "is_configured": false, 00:21:27.195 "data_offset": 2048, 00:21:27.195 "data_size": 63488 00:21:27.195 }, 00:21:27.195 { 00:21:27.195 "name": null, 00:21:27.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.196 "is_configured": false, 00:21:27.196 "data_offset": 2048, 00:21:27.196 "data_size": 63488 00:21:27.196 }, 00:21:27.196 { 00:21:27.196 "name": "BaseBdev3", 00:21:27.196 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:27.196 "is_configured": true, 00:21:27.196 "data_offset": 2048, 00:21:27.196 "data_size": 63488 00:21:27.196 }, 00:21:27.196 { 00:21:27.196 "name": "BaseBdev4", 00:21:27.196 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:27.196 "is_configured": true, 00:21:27.196 "data_offset": 2048, 00:21:27.196 "data_size": 63488 00:21:27.196 } 00:21:27.196 ] 00:21:27.196 }' 00:21:27.196 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.196 10:28:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:27.762 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.762 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:27.762 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:27.762 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:27.762 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:21:27.762 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.762 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.762 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:27.762 "name": "raid_bdev1", 00:21:27.762 "uuid": "e3744857-986d-4b8c-81f0-05966d86ad11", 00:21:27.762 "strip_size_kb": 0, 00:21:27.762 "state": "online", 00:21:27.762 "raid_level": "raid1", 00:21:27.762 "superblock": true, 00:21:27.762 "num_base_bdevs": 4, 00:21:27.762 "num_base_bdevs_discovered": 2, 00:21:27.762 "num_base_bdevs_operational": 2, 00:21:27.762 "base_bdevs_list": [ 00:21:27.763 { 00:21:27.763 "name": null, 00:21:27.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.763 "is_configured": false, 00:21:27.763 "data_offset": 2048, 00:21:27.763 "data_size": 63488 00:21:27.763 }, 00:21:27.763 { 00:21:27.763 "name": null, 00:21:27.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.763 "is_configured": false, 00:21:27.763 "data_offset": 2048, 00:21:27.763 "data_size": 63488 00:21:27.763 }, 00:21:27.763 { 00:21:27.763 "name": "BaseBdev3", 00:21:27.763 "uuid": "c21d0bed-8cba-59c0-b2ac-fc682ee08e2a", 00:21:27.763 "is_configured": true, 00:21:27.763 "data_offset": 2048, 00:21:27.763 "data_size": 63488 00:21:27.763 }, 00:21:27.763 { 00:21:27.763 "name": "BaseBdev4", 00:21:27.763 "uuid": "d10e1935-7382-5423-af00-67fa4ed3d5fc", 00:21:27.763 "is_configured": true, 00:21:27.763 "data_offset": 2048, 00:21:27.763 "data_size": 63488 00:21:27.763 } 00:21:27.763 ] 00:21:27.763 }' 00:21:27.763 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 1877247 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 1877247 ']' 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 1877247 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1877247 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1877247' 00:21:28.021 killing process with pid 1877247 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 1877247 00:21:28.021 Received shutdown signal, test time was about 23.849184 seconds 00:21:28.021 00:21:28.021 Latency(us) 00:21:28.021 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.021 =================================================================================================================== 00:21:28.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.021 [2024-07-15 10:28:52.669746] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:28.021 [2024-07-15 10:28:52.669817] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.021 [2024-07-15 10:28:52.669857] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:28.021 [2024-07-15 10:28:52.669865] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eb35f0 name raid_bdev1, state offline 00:21:28.021 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 1877247 00:21:28.021 [2024-07-15 10:28:52.702218] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:28.281 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:21:28.281 00:21:28.281 real 0m28.056s 00:21:28.281 user 0m42.311s 00:21:28.281 sys 0m4.278s 00:21:28.281 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.281 10:28:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.281 ************************************ 00:21:28.281 END TEST raid_rebuild_test_sb_io 00:21:28.281 ************************************ 00:21:28.281 10:28:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:28.281 10:28:52 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:21:28.281 10:28:52 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:21:28.281 10:28:52 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:28.281 10:28:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:28.281 10:28:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.281 10:28:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.281 ************************************ 00:21:28.281 START TEST raid_state_function_test_sb_4k 00:21:28.281 ************************************ 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 
-- # echo BaseBdev2 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=1882438 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1882438' 00:21:28.281 Process raid pid: 1882438 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 1882438 /var/tmp/spdk-raid.sock 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 1882438 ']' 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:28.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.281 10:28:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:28.281 [2024-07-15 10:28:53.020206] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:28.281 [2024-07-15 10:28:53.020248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:28.281 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.281 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:28.543 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.543 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:28.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.544 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:28.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.544 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:28.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.544 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:28.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.544 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:28.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.544 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:28.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.544 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:28.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:28.544 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:28.544 [2024-07-15 10:28:53.110950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.544 [2024-07-15 10:28:53.183757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.544 [2024-07-15 10:28:53.237133] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.544 [2024-07-15 10:28:53.237156] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.110 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.110 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:21:29.110 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:29.368 [2024-07-15 10:28:53.960516] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:29.368 [2024-07-15 10:28:53.960548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:29.368 [2024-07-15 10:28:53.960556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.368 [2024-07-15 10:28:53.960563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.368 10:28:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.368 10:28:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:29.368 "name": "Existed_Raid", 00:21:29.368 "uuid": "336dc696-517f-41fd-8dd1-8864e1d1701a", 00:21:29.368 "strip_size_kb": 0, 00:21:29.368 "state": "configuring", 00:21:29.368 "raid_level": "raid1", 00:21:29.368 "superblock": true, 00:21:29.368 "num_base_bdevs": 2, 00:21:29.368 "num_base_bdevs_discovered": 0, 00:21:29.368 "num_base_bdevs_operational": 2, 00:21:29.368 "base_bdevs_list": [ 00:21:29.368 { 00:21:29.368 "name": "BaseBdev1", 00:21:29.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.368 "is_configured": false, 00:21:29.368 "data_offset": 0, 00:21:29.368 "data_size": 0 00:21:29.368 }, 00:21:29.368 { 00:21:29.368 "name": "BaseBdev2", 00:21:29.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.368 "is_configured": false, 00:21:29.368 "data_offset": 0, 00:21:29.368 "data_size": 0 00:21:29.368 } 00:21:29.368 ] 00:21:29.368 }' 00:21:29.368 10:28:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:29.368 10:28:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:29.933 10:28:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:30.191 [2024-07-15 10:28:54.786542] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:30.191 [2024-07-15 10:28:54.786564] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1185f20 name Existed_Raid, state configuring 00:21:30.191 10:28:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:30.191 [2024-07-15 10:28:54.963012] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:30.191 [2024-07-15 10:28:54.963031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:30.191 [2024-07-15 10:28:54.963037] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:30.191 [2024-07-15 10:28:54.963045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:30.450 10:28:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:21:30.450 [2024-07-15 10:28:55.151950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:30.450 BaseBdev1 00:21:30.450 
10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:30.450 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:30.450 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:30.450 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:21:30.450 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:30.450 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:30.450 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:30.708 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:30.966 [ 00:21:30.966 { 00:21:30.966 "name": "BaseBdev1", 00:21:30.966 "aliases": [ 00:21:30.966 "edf9b8c1-797d-45be-8c24-256e64ee7aa7" 00:21:30.966 ], 00:21:30.966 "product_name": "Malloc disk", 00:21:30.966 "block_size": 4096, 00:21:30.966 "num_blocks": 8192, 00:21:30.966 "uuid": "edf9b8c1-797d-45be-8c24-256e64ee7aa7", 00:21:30.966 "assigned_rate_limits": { 00:21:30.966 "rw_ios_per_sec": 0, 00:21:30.966 "rw_mbytes_per_sec": 0, 00:21:30.966 "r_mbytes_per_sec": 0, 00:21:30.966 "w_mbytes_per_sec": 0 00:21:30.966 }, 00:21:30.966 "claimed": true, 00:21:30.966 "claim_type": "exclusive_write", 00:21:30.966 "zoned": false, 00:21:30.966 "supported_io_types": { 00:21:30.966 "read": true, 00:21:30.966 "write": true, 00:21:30.966 "unmap": true, 00:21:30.966 "flush": true, 00:21:30.966 "reset": true, 00:21:30.966 "nvme_admin": false, 00:21:30.966 "nvme_io": false, 00:21:30.966 "nvme_io_md": false, 00:21:30.966 "write_zeroes": true, 00:21:30.966 "zcopy": true, 00:21:30.966 "get_zone_info": false, 00:21:30.966 "zone_management": false, 00:21:30.966 "zone_append": false, 00:21:30.967 "compare": false, 00:21:30.967 "compare_and_write": false, 00:21:30.967 "abort": true, 00:21:30.967 "seek_hole": false, 00:21:30.967 "seek_data": false, 00:21:30.967 "copy": true, 00:21:30.967 "nvme_iov_md": false 00:21:30.967 }, 00:21:30.967 "memory_domains": [ 00:21:30.967 { 00:21:30.967 "dma_device_id": "system", 00:21:30.967 "dma_device_type": 1 00:21:30.967 }, 00:21:30.967 { 00:21:30.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.967 "dma_device_type": 2 00:21:30.967 } 00:21:30.967 ], 00:21:30.967 "driver_specific": {} 00:21:30.967 } 00:21:30.967 ] 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:30.967 
10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.967 "name": "Existed_Raid", 00:21:30.967 "uuid": "502d91a9-3a79-4be6-b080-37d87b66d1bb", 00:21:30.967 "strip_size_kb": 0, 00:21:30.967 "state": "configuring", 00:21:30.967 "raid_level": "raid1", 00:21:30.967 "superblock": true, 00:21:30.967 "num_base_bdevs": 2, 00:21:30.967 "num_base_bdevs_discovered": 1, 00:21:30.967 "num_base_bdevs_operational": 2, 00:21:30.967 "base_bdevs_list": [ 00:21:30.967 { 00:21:30.967 "name": "BaseBdev1", 00:21:30.967 "uuid": "edf9b8c1-797d-45be-8c24-256e64ee7aa7", 00:21:30.967 "is_configured": true, 00:21:30.967 "data_offset": 256, 00:21:30.967 "data_size": 7936 00:21:30.967 }, 00:21:30.967 { 00:21:30.967 "name": "BaseBdev2", 00:21:30.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.967 "is_configured": false, 00:21:30.967 "data_offset": 0, 00:21:30.967 "data_size": 0 00:21:30.967 } 00:21:30.967 ] 00:21:30.967 }' 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.967 10:28:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:31.533 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:31.792 [2024-07-15 10:28:56.326965] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:31.792 [2024-07-15 10:28:56.326993] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1185810 name Existed_Raid, state configuring 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:21:31.792 [2024-07-15 10:28:56.503452] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:31.792 [2024-07-15 10:28:56.504535] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:31.792 [2024-07-15 10:28:56.504564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- 
# verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.792 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.050 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:32.050 "name": "Existed_Raid", 00:21:32.050 "uuid": "ebe1e139-c724-4c71-88cb-d7e3291c286b", 00:21:32.050 "strip_size_kb": 0, 00:21:32.050 "state": "configuring", 00:21:32.050 "raid_level": "raid1", 00:21:32.050 "superblock": true, 00:21:32.050 "num_base_bdevs": 2, 00:21:32.050 "num_base_bdevs_discovered": 1, 00:21:32.050 "num_base_bdevs_operational": 2, 00:21:32.050 "base_bdevs_list": [ 00:21:32.050 { 00:21:32.050 "name": "BaseBdev1", 00:21:32.050 "uuid": "edf9b8c1-797d-45be-8c24-256e64ee7aa7", 00:21:32.050 "is_configured": true, 00:21:32.050 "data_offset": 256, 00:21:32.050 "data_size": 7936 00:21:32.050 }, 00:21:32.050 { 00:21:32.050 "name": "BaseBdev2", 00:21:32.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.050 "is_configured": false, 00:21:32.050 "data_offset": 0, 00:21:32.050 "data_size": 0 00:21:32.050 } 00:21:32.050 ] 00:21:32.050 }' 00:21:32.050 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:32.050 10:28:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:32.617 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:21:32.617 [2024-07-15 10:28:57.316171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.617 [2024-07-15 10:28:57.316279] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1186600 00:21:32.617 [2024-07-15 10:28:57.316288] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:32.617 [2024-07-15 10:28:57.316406] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11879c0 00:21:32.617 [2024-07-15 10:28:57.316488] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1186600 00:21:32.617 [2024-07-15 10:28:57.316494] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1186600 00:21:32.617 [2024-07-15 10:28:57.316556] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.617 BaseBdev2 00:21:32.617 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:32.617 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:32.617 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:32.617 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:21:32.617 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:32.617 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:32.617 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:32.875 [ 00:21:32.875 { 00:21:32.875 "name": "BaseBdev2", 00:21:32.875 "aliases": [ 00:21:32.875 "5773ac35-133f-4ede-95ad-76aad7fa2137" 00:21:32.875 ], 00:21:32.875 "product_name": "Malloc disk", 00:21:32.875 "block_size": 4096, 00:21:32.875 "num_blocks": 8192, 00:21:32.875 "uuid": "5773ac35-133f-4ede-95ad-76aad7fa2137", 00:21:32.875 "assigned_rate_limits": { 00:21:32.875 "rw_ios_per_sec": 0, 00:21:32.875 "rw_mbytes_per_sec": 0, 00:21:32.875 "r_mbytes_per_sec": 0, 00:21:32.875 "w_mbytes_per_sec": 0 00:21:32.875 }, 00:21:32.875 "claimed": true, 00:21:32.875 "claim_type": "exclusive_write", 00:21:32.875 "zoned": false, 00:21:32.875 "supported_io_types": { 00:21:32.875 "read": true, 00:21:32.875 "write": true, 00:21:32.875 "unmap": true, 00:21:32.875 "flush": true, 00:21:32.875 "reset": true, 00:21:32.875 "nvme_admin": false, 00:21:32.875 "nvme_io": false, 00:21:32.875 "nvme_io_md": false, 00:21:32.875 "write_zeroes": true, 00:21:32.875 "zcopy": true, 00:21:32.875 "get_zone_info": false, 00:21:32.875 "zone_management": false, 00:21:32.875 "zone_append": false, 00:21:32.875 "compare": false, 00:21:32.875 "compare_and_write": false, 00:21:32.875 "abort": true, 00:21:32.875 "seek_hole": false, 00:21:32.875 "seek_data": false, 00:21:32.875 "copy": true, 00:21:32.875 "nvme_iov_md": false 00:21:32.875 }, 00:21:32.875 "memory_domains": [ 00:21:32.875 { 00:21:32.875 "dma_device_id": "system", 00:21:32.875 "dma_device_type": 1 00:21:32.875 }, 00:21:32.875 { 00:21:32.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.875 "dma_device_type": 2 00:21:32.875 } 00:21:32.875 ], 00:21:32.875 "driver_specific": {} 00:21:32.875 } 00:21:32.875 ] 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:32.875 10:28:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.875 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.876 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.876 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.876 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.134 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.134 "name": "Existed_Raid", 00:21:33.134 "uuid": "ebe1e139-c724-4c71-88cb-d7e3291c286b", 00:21:33.134 "strip_size_kb": 0, 00:21:33.134 "state": "online", 00:21:33.134 "raid_level": "raid1", 00:21:33.134 "superblock": true, 00:21:33.134 "num_base_bdevs": 2, 00:21:33.134 "num_base_bdevs_discovered": 2, 00:21:33.134 "num_base_bdevs_operational": 2, 00:21:33.134 "base_bdevs_list": [ 00:21:33.134 { 00:21:33.134 "name": "BaseBdev1", 00:21:33.134 "uuid": "edf9b8c1-797d-45be-8c24-256e64ee7aa7", 00:21:33.134 "is_configured": true, 00:21:33.134 "data_offset": 256, 00:21:33.134 "data_size": 7936 00:21:33.134 }, 00:21:33.134 { 00:21:33.134 "name": "BaseBdev2", 00:21:33.134 "uuid": "5773ac35-133f-4ede-95ad-76aad7fa2137", 00:21:33.134 "is_configured": true, 00:21:33.134 "data_offset": 256, 00:21:33.134 "data_size": 7936 00:21:33.134 } 00:21:33.134 ] 00:21:33.134 }' 00:21:33.134 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.134 10:28:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:33.701 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:33.701 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:33.701 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:33.701 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:33.701 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:33.701 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:33.701 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:33.701 10:28:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:33.702 [2024-07-15 10:28:58.447300] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:33.702 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:33.702 "name": "Existed_Raid", 00:21:33.702 "aliases": [ 00:21:33.702 "ebe1e139-c724-4c71-88cb-d7e3291c286b" 00:21:33.702 ], 00:21:33.702 "product_name": "Raid Volume", 00:21:33.702 "block_size": 4096, 00:21:33.702 "num_blocks": 7936, 00:21:33.702 "uuid": "ebe1e139-c724-4c71-88cb-d7e3291c286b", 00:21:33.702 "assigned_rate_limits": { 00:21:33.702 "rw_ios_per_sec": 0, 00:21:33.702 "rw_mbytes_per_sec": 0, 00:21:33.702 "r_mbytes_per_sec": 0, 00:21:33.702 "w_mbytes_per_sec": 0 00:21:33.702 }, 00:21:33.702 "claimed": false, 00:21:33.702 "zoned": false, 00:21:33.702 "supported_io_types": { 00:21:33.702 "read": true, 00:21:33.702 "write": true, 00:21:33.702 "unmap": false, 00:21:33.702 "flush": false, 00:21:33.702 "reset": true, 00:21:33.702 "nvme_admin": false, 00:21:33.702 "nvme_io": false, 00:21:33.702 "nvme_io_md": false, 00:21:33.702 "write_zeroes": true, 00:21:33.702 "zcopy": false, 00:21:33.702 "get_zone_info": false, 00:21:33.702 "zone_management": false, 00:21:33.702 "zone_append": false, 00:21:33.702 "compare": false, 00:21:33.702 "compare_and_write": false, 00:21:33.702 "abort": false, 00:21:33.702 "seek_hole": false, 00:21:33.702 "seek_data": false, 00:21:33.702 "copy": false, 00:21:33.702 "nvme_iov_md": false 00:21:33.702 }, 00:21:33.702 "memory_domains": [ 00:21:33.702 { 00:21:33.702 "dma_device_id": "system", 00:21:33.702 "dma_device_type": 1 00:21:33.702 }, 00:21:33.702 { 00:21:33.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.702 "dma_device_type": 2 00:21:33.702 }, 00:21:33.702 { 00:21:33.702 "dma_device_id": "system", 00:21:33.702 "dma_device_type": 1 00:21:33.702 }, 00:21:33.702 { 00:21:33.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.702 "dma_device_type": 2 00:21:33.702 } 00:21:33.702 ], 00:21:33.702 "driver_specific": { 00:21:33.702 "raid": { 00:21:33.702 "uuid": "ebe1e139-c724-4c71-88cb-d7e3291c286b", 00:21:33.702 "strip_size_kb": 0, 00:21:33.702 "state": "online", 00:21:33.702 "raid_level": "raid1", 00:21:33.702 "superblock": true, 00:21:33.702 "num_base_bdevs": 2, 00:21:33.702 "num_base_bdevs_discovered": 2, 00:21:33.702 "num_base_bdevs_operational": 2, 00:21:33.702 "base_bdevs_list": [ 00:21:33.702 { 00:21:33.702 "name": "BaseBdev1", 00:21:33.702 "uuid": "edf9b8c1-797d-45be-8c24-256e64ee7aa7", 00:21:33.702 "is_configured": true, 00:21:33.702 "data_offset": 256, 00:21:33.702 "data_size": 7936 00:21:33.702 }, 00:21:33.702 { 00:21:33.702 "name": "BaseBdev2", 00:21:33.702 "uuid": "5773ac35-133f-4ede-95ad-76aad7fa2137", 00:21:33.702 "is_configured": true, 00:21:33.702 "data_offset": 256, 00:21:33.702 "data_size": 7936 00:21:33.702 } 00:21:33.702 ] 00:21:33.702 } 00:21:33.702 } 00:21:33.702 }' 00:21:33.702 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:33.960 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:33.960 BaseBdev2' 00:21:33.960 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:33.960 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:33.960 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:33.960 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:33.960 "name": "BaseBdev1", 00:21:33.960 "aliases": [ 00:21:33.960 "edf9b8c1-797d-45be-8c24-256e64ee7aa7" 00:21:33.960 ], 00:21:33.960 "product_name": "Malloc disk", 00:21:33.960 "block_size": 4096, 00:21:33.960 "num_blocks": 8192, 00:21:33.960 "uuid": "edf9b8c1-797d-45be-8c24-256e64ee7aa7", 00:21:33.960 "assigned_rate_limits": { 00:21:33.960 "rw_ios_per_sec": 0, 00:21:33.960 "rw_mbytes_per_sec": 0, 00:21:33.960 "r_mbytes_per_sec": 0, 00:21:33.960 "w_mbytes_per_sec": 0 00:21:33.960 }, 00:21:33.960 "claimed": true, 00:21:33.960 "claim_type": "exclusive_write", 00:21:33.960 "zoned": false, 00:21:33.960 "supported_io_types": { 00:21:33.960 "read": true, 00:21:33.960 "write": true, 00:21:33.960 "unmap": true, 00:21:33.960 "flush": true, 00:21:33.960 "reset": true, 00:21:33.960 "nvme_admin": false, 00:21:33.960 "nvme_io": false, 00:21:33.960 "nvme_io_md": false, 00:21:33.960 "write_zeroes": true, 00:21:33.960 "zcopy": true, 00:21:33.960 "get_zone_info": false, 00:21:33.960 "zone_management": false, 00:21:33.960 "zone_append": false, 00:21:33.960 "compare": false, 00:21:33.960 "compare_and_write": false, 00:21:33.960 "abort": true, 00:21:33.960 "seek_hole": false, 00:21:33.960 "seek_data": false, 00:21:33.960 "copy": true, 00:21:33.960 "nvme_iov_md": false 00:21:33.960 }, 00:21:33.960 "memory_domains": [ 00:21:33.960 { 00:21:33.960 "dma_device_id": "system", 00:21:33.960 "dma_device_type": 1 00:21:33.960 }, 00:21:33.960 { 00:21:33.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.960 "dma_device_type": 2 00:21:33.960 } 00:21:33.960 ], 00:21:33.960 "driver_specific": {} 00:21:33.960 }' 00:21:33.960 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.960 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.961 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:33.961 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.246 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:34.247 10:28:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:34.505 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:34.505 "name": "BaseBdev2", 00:21:34.505 "aliases": [ 00:21:34.505 "5773ac35-133f-4ede-95ad-76aad7fa2137" 00:21:34.505 ], 00:21:34.505 "product_name": "Malloc disk", 00:21:34.505 "block_size": 4096, 00:21:34.505 "num_blocks": 8192, 00:21:34.505 "uuid": "5773ac35-133f-4ede-95ad-76aad7fa2137", 00:21:34.505 "assigned_rate_limits": { 00:21:34.505 "rw_ios_per_sec": 0, 00:21:34.505 "rw_mbytes_per_sec": 0, 00:21:34.505 "r_mbytes_per_sec": 0, 00:21:34.505 "w_mbytes_per_sec": 0 00:21:34.505 }, 00:21:34.505 "claimed": true, 00:21:34.505 "claim_type": "exclusive_write", 00:21:34.505 "zoned": false, 00:21:34.505 "supported_io_types": { 00:21:34.505 "read": true, 00:21:34.505 "write": true, 00:21:34.505 "unmap": true, 00:21:34.505 "flush": true, 00:21:34.505 "reset": true, 00:21:34.505 "nvme_admin": false, 00:21:34.505 "nvme_io": false, 00:21:34.505 "nvme_io_md": false, 00:21:34.505 "write_zeroes": true, 00:21:34.505 "zcopy": true, 00:21:34.505 "get_zone_info": false, 00:21:34.505 "zone_management": false, 00:21:34.505 "zone_append": false, 00:21:34.505 "compare": false, 00:21:34.505 "compare_and_write": false, 00:21:34.505 "abort": true, 00:21:34.505 "seek_hole": false, 00:21:34.505 "seek_data": false, 00:21:34.505 "copy": true, 00:21:34.505 "nvme_iov_md": false 00:21:34.505 }, 00:21:34.505 "memory_domains": [ 00:21:34.505 { 00:21:34.505 "dma_device_id": "system", 00:21:34.505 "dma_device_type": 1 00:21:34.505 }, 00:21:34.505 { 00:21:34.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.505 "dma_device_type": 2 00:21:34.505 } 00:21:34.505 ], 00:21:34.505 "driver_specific": {} 00:21:34.505 }' 00:21:34.505 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.505 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.505 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:34.505 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.505 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.505 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:34.505 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.763 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.763 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:34.763 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.763 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.763 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.763 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:35.022 [2024-07-15 10:28:59.590106] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:35.022 10:28:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.022 "name": "Existed_Raid", 00:21:35.022 "uuid": "ebe1e139-c724-4c71-88cb-d7e3291c286b", 00:21:35.022 "strip_size_kb": 0, 00:21:35.022 "state": "online", 00:21:35.022 "raid_level": "raid1", 00:21:35.022 "superblock": true, 00:21:35.022 "num_base_bdevs": 2, 00:21:35.022 "num_base_bdevs_discovered": 1, 00:21:35.022 "num_base_bdevs_operational": 1, 00:21:35.022 "base_bdevs_list": [ 00:21:35.022 { 00:21:35.022 "name": null, 00:21:35.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.022 "is_configured": false, 00:21:35.022 "data_offset": 256, 00:21:35.022 "data_size": 7936 00:21:35.022 }, 00:21:35.022 { 00:21:35.022 "name": "BaseBdev2", 00:21:35.022 "uuid": "5773ac35-133f-4ede-95ad-76aad7fa2137", 00:21:35.022 "is_configured": true, 00:21:35.022 "data_offset": 256, 00:21:35.022 "data_size": 7936 00:21:35.022 } 00:21:35.022 ] 00:21:35.022 }' 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.022 10:28:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.589 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:35.589 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:35.589 10:29:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.589 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:35.847 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:35.847 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:35.847 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:35.847 [2024-07-15 10:29:00.613660] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:35.847 [2024-07-15 10:29:00.613719] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.847 [2024-07-15 10:29:00.623509] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.847 [2024-07-15 10:29:00.623532] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.847 [2024-07-15 10:29:00.623540] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1186600 name Existed_Raid, state offline 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 1882438 00:21:36.104 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 1882438 ']' 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 1882438 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1882438 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1882438' 00:21:36.105 killing process with pid 1882438 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 
1882438 00:21:36.105 [2024-07-15 10:29:00.879108] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:36.105 10:29:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 1882438 00:21:36.105 [2024-07-15 10:29:00.879890] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:36.363 10:29:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:21:36.363 00:21:36.363 real 0m8.093s 00:21:36.363 user 0m14.135s 00:21:36.363 sys 0m1.712s 00:21:36.363 10:29:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:36.363 10:29:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.363 ************************************ 00:21:36.363 END TEST raid_state_function_test_sb_4k 00:21:36.363 ************************************ 00:21:36.363 10:29:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:36.363 10:29:01 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:36.363 10:29:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:36.363 10:29:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.363 10:29:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:36.363 ************************************ 00:21:36.363 START TEST raid_superblock_test_4k 00:21:36.363 ************************************ 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=1884026 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 1884026 /var/tmp/spdk-raid.sock 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 1884026 ']' 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:36.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.363 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.621 [2024-07-15 10:29:01.173342] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:36.621 [2024-07-15 10:29:01.173385] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1884026 ] 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:02.6 cannot be used 
00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:36.621 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:36.621 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:36.621 [2024-07-15 10:29:01.265736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.621 [2024-07-15 10:29:01.344326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.621 [2024-07-15 10:29:01.401359] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.621 [2024-07-15 10:29:01.401383] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 
-- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:37.188 10:29:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:21:37.447 malloc1 00:21:37.447 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:37.706 [2024-07-15 10:29:02.301982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:37.706 [2024-07-15 10:29:02.302018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.706 [2024-07-15 10:29:02.302032] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x81c2f0 00:21:37.706 [2024-07-15 10:29:02.302041] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.706 [2024-07-15 10:29:02.303180] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.706 [2024-07-15 10:29:02.303202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:37.706 pt1 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:21:37.706 malloc2 00:21:37.706 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:37.965 [2024-07-15 10:29:02.618498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:37.965 [2024-07-15 10:29:02.618529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.965 [2024-07-15 10:29:02.618541] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x81d6d0 00:21:37.965 [2024-07-15 10:29:02.618549] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.965 [2024-07-15 
10:29:02.619598] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.965 [2024-07-15 10:29:02.619621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:37.965 pt2 00:21:37.965 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:37.965 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:37.965 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:21:38.224 [2024-07-15 10:29:02.778921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:38.224 [2024-07-15 10:29:02.779757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.224 [2024-07-15 10:29:02.779853] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x9b6310 00:21:38.224 [2024-07-15 10:29:02.779861] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:38.224 [2024-07-15 10:29:02.780015] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9b5ce0 00:21:38.224 [2024-07-15 10:29:02.780109] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9b6310 00:21:38.224 [2024-07-15 10:29:02.780116] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9b6310 00:21:38.224 [2024-07-15 10:29:02.780178] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.224 "name": "raid_bdev1", 00:21:38.224 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:38.224 "strip_size_kb": 0, 00:21:38.224 "state": "online", 00:21:38.224 "raid_level": "raid1", 00:21:38.224 "superblock": true, 00:21:38.224 "num_base_bdevs": 2, 00:21:38.224 "num_base_bdevs_discovered": 2, 00:21:38.224 
"num_base_bdevs_operational": 2, 00:21:38.224 "base_bdevs_list": [ 00:21:38.224 { 00:21:38.224 "name": "pt1", 00:21:38.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:38.224 "is_configured": true, 00:21:38.224 "data_offset": 256, 00:21:38.224 "data_size": 7936 00:21:38.224 }, 00:21:38.224 { 00:21:38.224 "name": "pt2", 00:21:38.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.224 "is_configured": true, 00:21:38.224 "data_offset": 256, 00:21:38.224 "data_size": 7936 00:21:38.224 } 00:21:38.224 ] 00:21:38.224 }' 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.224 10:29:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.792 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:38.792 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:38.792 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:38.792 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:38.792 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:38.792 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:38.792 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:38.792 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:39.051 [2024-07-15 10:29:03.597173] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.051 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:39.051 "name": "raid_bdev1", 00:21:39.051 "aliases": [ 00:21:39.051 "e14955c3-923b-44c1-81bf-c9b4f648e898" 00:21:39.051 ], 00:21:39.051 "product_name": "Raid Volume", 00:21:39.051 "block_size": 4096, 00:21:39.051 "num_blocks": 7936, 00:21:39.051 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:39.051 "assigned_rate_limits": { 00:21:39.051 "rw_ios_per_sec": 0, 00:21:39.051 "rw_mbytes_per_sec": 0, 00:21:39.051 "r_mbytes_per_sec": 0, 00:21:39.051 "w_mbytes_per_sec": 0 00:21:39.051 }, 00:21:39.051 "claimed": false, 00:21:39.051 "zoned": false, 00:21:39.051 "supported_io_types": { 00:21:39.051 "read": true, 00:21:39.051 "write": true, 00:21:39.051 "unmap": false, 00:21:39.051 "flush": false, 00:21:39.051 "reset": true, 00:21:39.051 "nvme_admin": false, 00:21:39.051 "nvme_io": false, 00:21:39.051 "nvme_io_md": false, 00:21:39.051 "write_zeroes": true, 00:21:39.051 "zcopy": false, 00:21:39.051 "get_zone_info": false, 00:21:39.051 "zone_management": false, 00:21:39.051 "zone_append": false, 00:21:39.051 "compare": false, 00:21:39.051 "compare_and_write": false, 00:21:39.051 "abort": false, 00:21:39.051 "seek_hole": false, 00:21:39.051 "seek_data": false, 00:21:39.051 "copy": false, 00:21:39.051 "nvme_iov_md": false 00:21:39.051 }, 00:21:39.051 "memory_domains": [ 00:21:39.051 { 00:21:39.051 "dma_device_id": "system", 00:21:39.051 "dma_device_type": 1 00:21:39.051 }, 00:21:39.051 { 00:21:39.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.051 "dma_device_type": 2 00:21:39.051 }, 00:21:39.051 { 00:21:39.051 "dma_device_id": "system", 00:21:39.051 "dma_device_type": 1 00:21:39.051 }, 00:21:39.051 
{ 00:21:39.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.051 "dma_device_type": 2 00:21:39.051 } 00:21:39.051 ], 00:21:39.051 "driver_specific": { 00:21:39.051 "raid": { 00:21:39.051 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:39.051 "strip_size_kb": 0, 00:21:39.051 "state": "online", 00:21:39.051 "raid_level": "raid1", 00:21:39.051 "superblock": true, 00:21:39.051 "num_base_bdevs": 2, 00:21:39.051 "num_base_bdevs_discovered": 2, 00:21:39.051 "num_base_bdevs_operational": 2, 00:21:39.051 "base_bdevs_list": [ 00:21:39.051 { 00:21:39.052 "name": "pt1", 00:21:39.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.052 "is_configured": true, 00:21:39.052 "data_offset": 256, 00:21:39.052 "data_size": 7936 00:21:39.052 }, 00:21:39.052 { 00:21:39.052 "name": "pt2", 00:21:39.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.052 "is_configured": true, 00:21:39.052 "data_offset": 256, 00:21:39.052 "data_size": 7936 00:21:39.052 } 00:21:39.052 ] 00:21:39.052 } 00:21:39.052 } 00:21:39.052 }' 00:21:39.052 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.052 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:39.052 pt2' 00:21:39.052 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:39.052 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:39.052 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:39.052 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:39.052 "name": "pt1", 00:21:39.052 "aliases": [ 00:21:39.052 "00000000-0000-0000-0000-000000000001" 00:21:39.052 ], 00:21:39.052 "product_name": "passthru", 00:21:39.052 "block_size": 4096, 00:21:39.052 "num_blocks": 8192, 00:21:39.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.052 "assigned_rate_limits": { 00:21:39.052 "rw_ios_per_sec": 0, 00:21:39.052 "rw_mbytes_per_sec": 0, 00:21:39.052 "r_mbytes_per_sec": 0, 00:21:39.052 "w_mbytes_per_sec": 0 00:21:39.052 }, 00:21:39.052 "claimed": true, 00:21:39.052 "claim_type": "exclusive_write", 00:21:39.052 "zoned": false, 00:21:39.052 "supported_io_types": { 00:21:39.052 "read": true, 00:21:39.052 "write": true, 00:21:39.052 "unmap": true, 00:21:39.052 "flush": true, 00:21:39.052 "reset": true, 00:21:39.052 "nvme_admin": false, 00:21:39.052 "nvme_io": false, 00:21:39.052 "nvme_io_md": false, 00:21:39.052 "write_zeroes": true, 00:21:39.052 "zcopy": true, 00:21:39.052 "get_zone_info": false, 00:21:39.052 "zone_management": false, 00:21:39.052 "zone_append": false, 00:21:39.052 "compare": false, 00:21:39.052 "compare_and_write": false, 00:21:39.052 "abort": true, 00:21:39.052 "seek_hole": false, 00:21:39.052 "seek_data": false, 00:21:39.052 "copy": true, 00:21:39.052 "nvme_iov_md": false 00:21:39.052 }, 00:21:39.052 "memory_domains": [ 00:21:39.052 { 00:21:39.052 "dma_device_id": "system", 00:21:39.052 "dma_device_type": 1 00:21:39.052 }, 00:21:39.052 { 00:21:39.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.052 "dma_device_type": 2 00:21:39.052 } 00:21:39.052 ], 00:21:39.052 "driver_specific": { 00:21:39.052 "passthru": { 00:21:39.052 "name": "pt1", 00:21:39.052 "base_bdev_name": "malloc1" 00:21:39.052 } 
00:21:39.052 } 00:21:39.052 }' 00:21:39.052 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.350 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.350 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:39.350 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.350 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.350 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:39.350 10:29:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.350 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.350 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.350 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.350 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.635 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.635 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:39.635 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:39.635 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:39.635 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:39.635 "name": "pt2", 00:21:39.635 "aliases": [ 00:21:39.635 "00000000-0000-0000-0000-000000000002" 00:21:39.635 ], 00:21:39.635 "product_name": "passthru", 00:21:39.635 "block_size": 4096, 00:21:39.635 "num_blocks": 8192, 00:21:39.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.635 "assigned_rate_limits": { 00:21:39.635 "rw_ios_per_sec": 0, 00:21:39.635 "rw_mbytes_per_sec": 0, 00:21:39.635 "r_mbytes_per_sec": 0, 00:21:39.635 "w_mbytes_per_sec": 0 00:21:39.635 }, 00:21:39.635 "claimed": true, 00:21:39.635 "claim_type": "exclusive_write", 00:21:39.635 "zoned": false, 00:21:39.635 "supported_io_types": { 00:21:39.635 "read": true, 00:21:39.635 "write": true, 00:21:39.635 "unmap": true, 00:21:39.635 "flush": true, 00:21:39.635 "reset": true, 00:21:39.635 "nvme_admin": false, 00:21:39.635 "nvme_io": false, 00:21:39.635 "nvme_io_md": false, 00:21:39.635 "write_zeroes": true, 00:21:39.635 "zcopy": true, 00:21:39.635 "get_zone_info": false, 00:21:39.635 "zone_management": false, 00:21:39.635 "zone_append": false, 00:21:39.635 "compare": false, 00:21:39.635 "compare_and_write": false, 00:21:39.635 "abort": true, 00:21:39.635 "seek_hole": false, 00:21:39.635 "seek_data": false, 00:21:39.635 "copy": true, 00:21:39.635 "nvme_iov_md": false 00:21:39.635 }, 00:21:39.635 "memory_domains": [ 00:21:39.635 { 00:21:39.635 "dma_device_id": "system", 00:21:39.635 "dma_device_type": 1 00:21:39.635 }, 00:21:39.635 { 00:21:39.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.635 "dma_device_type": 2 00:21:39.635 } 00:21:39.635 ], 00:21:39.635 "driver_specific": { 00:21:39.635 "passthru": { 00:21:39.635 "name": "pt2", 00:21:39.635 "base_bdev_name": "malloc2" 00:21:39.635 } 00:21:39.635 } 00:21:39.635 }' 00:21:39.635 10:29:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.635 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:39.635 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:39.635 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:39.894 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:40.154 [2024-07-15 10:29:04.816302] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.154 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e14955c3-923b-44c1-81bf-c9b4f648e898 00:21:40.154 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z e14955c3-923b-44c1-81bf-c9b4f648e898 ']' 00:21:40.154 10:29:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:40.413 [2024-07-15 10:29:04.988594] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.413 [2024-07-15 10:29:04.988608] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.413 [2024-07-15 10:29:04.988643] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.413 [2024-07-15 10:29:04.988678] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.413 [2024-07-15 10:29:04.988691] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9b6310 name raid_bdev1, state offline 00:21:40.413 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:40.413 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.413 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:40.413 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:40.413 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:40.413 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:40.672 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:40.672 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:40.931 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:40.932 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:21:41.190 [2024-07-15 10:29:05.814699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:41.190 [2024-07-15 10:29:05.815625] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:41.190 [2024-07-15 10:29:05.815665] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:41.190 [2024-07-15 10:29:05.815693] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:41.191 [2024-07-15 10:29:05.815726] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:21:41.191 [2024-07-15 10:29:05.815732] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9bf3f0 name raid_bdev1, state configuring 00:21:41.191 request: 00:21:41.191 { 00:21:41.191 "name": "raid_bdev1", 00:21:41.191 "raid_level": "raid1", 00:21:41.191 "base_bdevs": [ 00:21:41.191 "malloc1", 00:21:41.191 "malloc2" 00:21:41.191 ], 00:21:41.191 "superblock": false, 00:21:41.191 "method": "bdev_raid_create", 00:21:41.191 "req_id": 1 00:21:41.191 } 00:21:41.191 Got JSON-RPC error response 00:21:41.191 response: 00:21:41.191 { 00:21:41.191 "code": -17, 00:21:41.191 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:41.191 } 00:21:41.191 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:21:41.191 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.191 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.191 10:29:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.191 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:41.191 10:29:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:41.450 [2024-07-15 10:29:06.143514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:41.450 [2024-07-15 10:29:06.143542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.450 [2024-07-15 10:29:06.143554] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9bfd70 00:21:41.450 [2024-07-15 10:29:06.143578] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.450 [2024-07-15 10:29:06.144702] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.450 [2024-07-15 10:29:06.144724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:41.450 [2024-07-15 10:29:06.144769] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:41.450 [2024-07-15 10:29:06.144787] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:41.450 pt1 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:41.450 10:29:06 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.450 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.709 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.709 "name": "raid_bdev1", 00:21:41.709 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:41.709 "strip_size_kb": 0, 00:21:41.709 "state": "configuring", 00:21:41.709 "raid_level": "raid1", 00:21:41.709 "superblock": true, 00:21:41.709 "num_base_bdevs": 2, 00:21:41.709 "num_base_bdevs_discovered": 1, 00:21:41.709 "num_base_bdevs_operational": 2, 00:21:41.709 "base_bdevs_list": [ 00:21:41.709 { 00:21:41.709 "name": "pt1", 00:21:41.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.709 "is_configured": true, 00:21:41.709 "data_offset": 256, 00:21:41.709 "data_size": 7936 00:21:41.709 }, 00:21:41.709 { 00:21:41.709 "name": null, 00:21:41.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.709 "is_configured": false, 00:21:41.709 "data_offset": 256, 00:21:41.709 "data_size": 7936 00:21:41.709 } 00:21:41.709 ] 00:21:41.709 }' 00:21:41.709 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.709 10:29:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:42.278 [2024-07-15 10:29:06.965641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:42.278 [2024-07-15 10:29:06.965680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.278 [2024-07-15 10:29:06.965709] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b6bb0 00:21:42.278 [2024-07-15 10:29:06.965717] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.278 [2024-07-15 10:29:06.965976] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.278 [2024-07-15 10:29:06.965990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:42.278 [2024-07-15 10:29:06.966034] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:42.278 [2024-07-15 10:29:06.966046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:42.278 [2024-07-15 10:29:06.966111] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x9b4de0 00:21:42.278 [2024-07-15 10:29:06.966118] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:42.278 [2024-07-15 10:29:06.966227] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x815eb0 00:21:42.278 [2024-07-15 10:29:06.966314] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9b4de0 00:21:42.278 [2024-07-15 10:29:06.966321] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9b4de0 00:21:42.278 [2024-07-15 10:29:06.966384] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.278 pt2 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:42.278 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:42.279 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:42.279 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:42.279 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.279 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.279 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.279 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.279 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.279 10:29:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.548 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.548 "name": "raid_bdev1", 00:21:42.548 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:42.548 "strip_size_kb": 0, 00:21:42.548 "state": "online", 00:21:42.548 "raid_level": "raid1", 00:21:42.548 "superblock": true, 00:21:42.548 "num_base_bdevs": 2, 00:21:42.548 "num_base_bdevs_discovered": 2, 00:21:42.548 "num_base_bdevs_operational": 2, 00:21:42.548 "base_bdevs_list": [ 00:21:42.548 { 00:21:42.548 "name": "pt1", 00:21:42.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:42.548 "is_configured": true, 00:21:42.548 "data_offset": 256, 00:21:42.548 "data_size": 7936 00:21:42.548 }, 00:21:42.548 { 00:21:42.548 "name": "pt2", 00:21:42.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.548 "is_configured": true, 00:21:42.548 "data_offset": 256, 00:21:42.548 "data_size": 7936 00:21:42.548 } 00:21:42.548 ] 00:21:42.548 }' 00:21:42.548 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.548 10:29:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:43.118 [2024-07-15 10:29:07.771876] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:43.118 "name": "raid_bdev1", 00:21:43.118 "aliases": [ 00:21:43.118 "e14955c3-923b-44c1-81bf-c9b4f648e898" 00:21:43.118 ], 00:21:43.118 "product_name": "Raid Volume", 00:21:43.118 "block_size": 4096, 00:21:43.118 "num_blocks": 7936, 00:21:43.118 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:43.118 "assigned_rate_limits": { 00:21:43.118 "rw_ios_per_sec": 0, 00:21:43.118 "rw_mbytes_per_sec": 0, 00:21:43.118 "r_mbytes_per_sec": 0, 00:21:43.118 "w_mbytes_per_sec": 0 00:21:43.118 }, 00:21:43.118 "claimed": false, 00:21:43.118 "zoned": false, 00:21:43.118 "supported_io_types": { 00:21:43.118 "read": true, 00:21:43.118 "write": true, 00:21:43.118 "unmap": false, 00:21:43.118 "flush": false, 00:21:43.118 "reset": true, 00:21:43.118 "nvme_admin": false, 00:21:43.118 "nvme_io": false, 00:21:43.118 "nvme_io_md": false, 00:21:43.118 "write_zeroes": true, 00:21:43.118 "zcopy": false, 00:21:43.118 "get_zone_info": false, 00:21:43.118 "zone_management": false, 00:21:43.118 "zone_append": false, 00:21:43.118 "compare": false, 00:21:43.118 "compare_and_write": false, 00:21:43.118 "abort": false, 00:21:43.118 "seek_hole": false, 00:21:43.118 "seek_data": false, 00:21:43.118 "copy": false, 00:21:43.118 "nvme_iov_md": false 00:21:43.118 }, 00:21:43.118 "memory_domains": [ 00:21:43.118 { 00:21:43.118 "dma_device_id": "system", 00:21:43.118 "dma_device_type": 1 00:21:43.118 }, 00:21:43.118 { 00:21:43.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.118 "dma_device_type": 2 00:21:43.118 }, 00:21:43.118 { 00:21:43.118 "dma_device_id": "system", 00:21:43.118 "dma_device_type": 1 00:21:43.118 }, 00:21:43.118 { 00:21:43.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.118 "dma_device_type": 2 00:21:43.118 } 00:21:43.118 ], 00:21:43.118 "driver_specific": { 00:21:43.118 "raid": { 00:21:43.118 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:43.118 "strip_size_kb": 0, 00:21:43.118 "state": "online", 00:21:43.118 "raid_level": "raid1", 00:21:43.118 "superblock": true, 00:21:43.118 "num_base_bdevs": 2, 00:21:43.118 "num_base_bdevs_discovered": 2, 00:21:43.118 "num_base_bdevs_operational": 2, 00:21:43.118 "base_bdevs_list": [ 00:21:43.118 { 00:21:43.118 "name": "pt1", 00:21:43.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.118 "is_configured": true, 00:21:43.118 "data_offset": 256, 00:21:43.118 "data_size": 7936 00:21:43.118 }, 00:21:43.118 { 00:21:43.118 "name": "pt2", 
00:21:43.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.118 "is_configured": true, 00:21:43.118 "data_offset": 256, 00:21:43.118 "data_size": 7936 00:21:43.118 } 00:21:43.118 ] 00:21:43.118 } 00:21:43.118 } 00:21:43.118 }' 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:43.118 pt2' 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:43.118 10:29:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.376 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:43.376 "name": "pt1", 00:21:43.376 "aliases": [ 00:21:43.376 "00000000-0000-0000-0000-000000000001" 00:21:43.376 ], 00:21:43.376 "product_name": "passthru", 00:21:43.376 "block_size": 4096, 00:21:43.376 "num_blocks": 8192, 00:21:43.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.376 "assigned_rate_limits": { 00:21:43.377 "rw_ios_per_sec": 0, 00:21:43.377 "rw_mbytes_per_sec": 0, 00:21:43.377 "r_mbytes_per_sec": 0, 00:21:43.377 "w_mbytes_per_sec": 0 00:21:43.377 }, 00:21:43.377 "claimed": true, 00:21:43.377 "claim_type": "exclusive_write", 00:21:43.377 "zoned": false, 00:21:43.377 "supported_io_types": { 00:21:43.377 "read": true, 00:21:43.377 "write": true, 00:21:43.377 "unmap": true, 00:21:43.377 "flush": true, 00:21:43.377 "reset": true, 00:21:43.377 "nvme_admin": false, 00:21:43.377 "nvme_io": false, 00:21:43.377 "nvme_io_md": false, 00:21:43.377 "write_zeroes": true, 00:21:43.377 "zcopy": true, 00:21:43.377 "get_zone_info": false, 00:21:43.377 "zone_management": false, 00:21:43.377 "zone_append": false, 00:21:43.377 "compare": false, 00:21:43.377 "compare_and_write": false, 00:21:43.377 "abort": true, 00:21:43.377 "seek_hole": false, 00:21:43.377 "seek_data": false, 00:21:43.377 "copy": true, 00:21:43.377 "nvme_iov_md": false 00:21:43.377 }, 00:21:43.377 "memory_domains": [ 00:21:43.377 { 00:21:43.377 "dma_device_id": "system", 00:21:43.377 "dma_device_type": 1 00:21:43.377 }, 00:21:43.377 { 00:21:43.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.377 "dma_device_type": 2 00:21:43.377 } 00:21:43.377 ], 00:21:43.377 "driver_specific": { 00:21:43.377 "passthru": { 00:21:43.377 "name": "pt1", 00:21:43.377 "base_bdev_name": "malloc1" 00:21:43.377 } 00:21:43.377 } 00:21:43.377 }' 00:21:43.377 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.377 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.377 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:43.377 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.377 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.377 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.377 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.636 10:29:08 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.636 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.636 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.636 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.636 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:43.636 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.636 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:43.636 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:43.894 "name": "pt2", 00:21:43.894 "aliases": [ 00:21:43.894 "00000000-0000-0000-0000-000000000002" 00:21:43.894 ], 00:21:43.894 "product_name": "passthru", 00:21:43.894 "block_size": 4096, 00:21:43.894 "num_blocks": 8192, 00:21:43.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.894 "assigned_rate_limits": { 00:21:43.894 "rw_ios_per_sec": 0, 00:21:43.894 "rw_mbytes_per_sec": 0, 00:21:43.894 "r_mbytes_per_sec": 0, 00:21:43.894 "w_mbytes_per_sec": 0 00:21:43.894 }, 00:21:43.894 "claimed": true, 00:21:43.894 "claim_type": "exclusive_write", 00:21:43.894 "zoned": false, 00:21:43.894 "supported_io_types": { 00:21:43.894 "read": true, 00:21:43.894 "write": true, 00:21:43.894 "unmap": true, 00:21:43.894 "flush": true, 00:21:43.894 "reset": true, 00:21:43.894 "nvme_admin": false, 00:21:43.894 "nvme_io": false, 00:21:43.894 "nvme_io_md": false, 00:21:43.894 "write_zeroes": true, 00:21:43.894 "zcopy": true, 00:21:43.894 "get_zone_info": false, 00:21:43.894 "zone_management": false, 00:21:43.894 "zone_append": false, 00:21:43.894 "compare": false, 00:21:43.894 "compare_and_write": false, 00:21:43.894 "abort": true, 00:21:43.894 "seek_hole": false, 00:21:43.894 "seek_data": false, 00:21:43.894 "copy": true, 00:21:43.894 "nvme_iov_md": false 00:21:43.894 }, 00:21:43.894 "memory_domains": [ 00:21:43.894 { 00:21:43.894 "dma_device_id": "system", 00:21:43.894 "dma_device_type": 1 00:21:43.894 }, 00:21:43.894 { 00:21:43.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.894 "dma_device_type": 2 00:21:43.894 } 00:21:43.894 ], 00:21:43.894 "driver_specific": { 00:21:43.894 "passthru": { 00:21:43.894 "name": "pt2", 00:21:43.894 "base_bdev_name": "malloc2" 00:21:43.894 } 00:21:43.894 } 00:21:43.894 }' 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.894 10:29:08 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.894 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:44.153 [2024-07-15 10:29:08.914791] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' e14955c3-923b-44c1-81bf-c9b4f648e898 '!=' e14955c3-923b-44c1-81bf-c9b4f648e898 ']' 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:21:44.153 10:29:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:44.412 [2024-07-15 10:29:09.091125] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.412 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.671 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.671 "name": "raid_bdev1", 00:21:44.671 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:44.671 "strip_size_kb": 0, 00:21:44.671 "state": "online", 00:21:44.671 "raid_level": "raid1", 00:21:44.671 "superblock": true, 00:21:44.671 "num_base_bdevs": 2, 00:21:44.671 "num_base_bdevs_discovered": 1, 00:21:44.671 "num_base_bdevs_operational": 1, 00:21:44.671 "base_bdevs_list": [ 
00:21:44.671 { 00:21:44.671 "name": null, 00:21:44.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.671 "is_configured": false, 00:21:44.671 "data_offset": 256, 00:21:44.671 "data_size": 7936 00:21:44.671 }, 00:21:44.671 { 00:21:44.671 "name": "pt2", 00:21:44.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.671 "is_configured": true, 00:21:44.671 "data_offset": 256, 00:21:44.671 "data_size": 7936 00:21:44.671 } 00:21:44.671 ] 00:21:44.671 }' 00:21:44.671 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.671 10:29:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.240 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:45.240 [2024-07-15 10:29:09.945310] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:45.240 [2024-07-15 10:29:09.945331] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:45.240 [2024-07-15 10:29:09.945367] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.240 [2024-07-15 10:29:09.945398] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.240 [2024-07-15 10:29:09.945406] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9b4de0 name raid_bdev1, state offline 00:21:45.240 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.240 10:29:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:45.499 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:45.499 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:45.499 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:45.499 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:45.499 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:45.758 [2024-07-15 10:29:10.466777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:45.758 [2024-07-15 10:29:10.466813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.758 [2024-07-15 10:29:10.466825] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x9b3f90 00:21:45.758 [2024-07-15 10:29:10.466834] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.758 [2024-07-15 10:29:10.467971] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.758 [2024-07-15 10:29:10.467991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:45.758 [2024-07-15 10:29:10.468033] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:45.758 [2024-07-15 10:29:10.468051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:45.758 [2024-07-15 10:29:10.468109] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x814b40 00:21:45.758 [2024-07-15 10:29:10.468120] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:45.758 [2024-07-15 10:29:10.468228] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9c0810 00:21:45.758 [2024-07-15 10:29:10.468305] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x814b40 00:21:45.758 [2024-07-15 10:29:10.468311] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x814b40 00:21:45.758 [2024-07-15 10:29:10.468374] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.758 pt2 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.758 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.018 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.018 "name": "raid_bdev1", 00:21:46.018 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:46.018 "strip_size_kb": 0, 00:21:46.018 "state": "online", 00:21:46.018 "raid_level": "raid1", 00:21:46.018 "superblock": true, 00:21:46.018 "num_base_bdevs": 2, 00:21:46.018 "num_base_bdevs_discovered": 1, 00:21:46.018 "num_base_bdevs_operational": 1, 00:21:46.018 "base_bdevs_list": [ 00:21:46.018 { 00:21:46.018 "name": null, 00:21:46.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.018 "is_configured": false, 00:21:46.018 "data_offset": 256, 00:21:46.018 "data_size": 7936 
00:21:46.018 }, 00:21:46.018 { 00:21:46.018 "name": "pt2", 00:21:46.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.018 "is_configured": true, 00:21:46.018 "data_offset": 256, 00:21:46.018 "data_size": 7936 00:21:46.018 } 00:21:46.018 ] 00:21:46.018 }' 00:21:46.018 10:29:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.018 10:29:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.586 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:46.586 [2024-07-15 10:29:11.292894] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.586 [2024-07-15 10:29:11.292922] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.586 [2024-07-15 10:29:11.292955] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.586 [2024-07-15 10:29:11.292982] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.586 [2024-07-15 10:29:11.292989] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x814b40 name raid_bdev1, state offline 00:21:46.586 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.586 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:46.846 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:46.846 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:46.846 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:21:46.846 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:46.846 [2024-07-15 10:29:11.633773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:46.846 [2024-07-15 10:29:11.633805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.846 [2024-07-15 10:29:11.633835] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b68d0 00:21:46.846 [2024-07-15 10:29:11.633843] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.106 [2024-07-15 10:29:11.635005] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.106 [2024-07-15 10:29:11.635027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:47.106 [2024-07-15 10:29:11.635073] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:47.106 [2024-07-15 10:29:11.635091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:47.106 [2024-07-15 10:29:11.635160] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:47.106 [2024-07-15 10:29:11.635168] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:47.106 [2024-07-15 10:29:11.635177] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x815690 name raid_bdev1, state configuring 00:21:47.106 [2024-07-15 10:29:11.635193] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:47.106 [2024-07-15 10:29:11.635232] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8141e0 00:21:47.106 [2024-07-15 10:29:11.635238] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:47.106 [2024-07-15 10:29:11.635352] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x81c990 00:21:47.106 [2024-07-15 10:29:11.635433] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8141e0 00:21:47.106 [2024-07-15 10:29:11.635440] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x8141e0 00:21:47.106 [2024-07-15 10:29:11.635505] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.106 pt1 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.106 "name": "raid_bdev1", 00:21:47.106 "uuid": "e14955c3-923b-44c1-81bf-c9b4f648e898", 00:21:47.106 "strip_size_kb": 0, 00:21:47.106 "state": "online", 00:21:47.106 "raid_level": "raid1", 00:21:47.106 "superblock": true, 00:21:47.106 "num_base_bdevs": 2, 00:21:47.106 "num_base_bdevs_discovered": 1, 00:21:47.106 "num_base_bdevs_operational": 1, 00:21:47.106 "base_bdevs_list": [ 00:21:47.106 { 00:21:47.106 "name": null, 00:21:47.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.106 "is_configured": false, 00:21:47.106 "data_offset": 256, 00:21:47.106 "data_size": 7936 00:21:47.106 }, 00:21:47.106 { 00:21:47.106 "name": "pt2", 00:21:47.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.106 "is_configured": true, 00:21:47.106 "data_offset": 256, 00:21:47.106 "data_size": 7936 00:21:47.106 } 00:21:47.106 ] 00:21:47.106 }' 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:21:47.106 10:29:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.674 10:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:47.674 10:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:47.674 10:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:47.674 10:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:47.674 10:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:47.934 [2024-07-15 10:29:12.568324] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' e14955c3-923b-44c1-81bf-c9b4f648e898 '!=' e14955c3-923b-44c1-81bf-c9b4f648e898 ']' 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 1884026 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 1884026 ']' 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 1884026 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884026 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884026' 00:21:47.934 killing process with pid 1884026 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 1884026 00:21:47.934 [2024-07-15 10:29:12.637229] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:47.934 [2024-07-15 10:29:12.637265] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:47.934 [2024-07-15 10:29:12.637294] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:47.934 [2024-07-15 10:29:12.637303] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8141e0 name raid_bdev1, state offline 00:21:47.934 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 1884026 00:21:47.934 [2024-07-15 10:29:12.652400] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:48.194 10:29:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:21:48.194 00:21:48.194 real 0m11.689s 00:21:48.194 user 0m21.036s 00:21:48.194 sys 0m2.272s 00:21:48.194 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.194 10:29:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.194 ************************************ 00:21:48.194 
END TEST raid_superblock_test_4k 00:21:48.194 ************************************ 00:21:48.194 10:29:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:48.194 10:29:12 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:21:48.194 10:29:12 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:48.194 10:29:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:21:48.194 10:29:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.194 10:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.194 ************************************ 00:21:48.194 START TEST raid_rebuild_test_sb_4k 00:21:48.194 ************************************ 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:48.194 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # 
raid_pid=1886413 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 1886413 /var/tmp/spdk-raid.sock 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 1886413 ']' 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:48.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.195 10:29:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.195 [2024-07-15 10:29:12.966418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:48.195 [2024-07-15 10:29:12.966466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886413 ] 00:21:48.195 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:48.195 Zero copy mechanism will not be used. 
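Before the rebuild test proper starts, the harness launches bdevperf against a private RPC socket and waits for it to come up; the zero-copy notice above is simply a consequence of the 3M I/O size exceeding the 64K zero-copy threshold. A rough sketch of that startup, with the command line and PID handling taken from the trace (waitforlisten is the autotest_common.sh helper named above; backgrounding with & is an assumption about how the script keeps the process running):

spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
$spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# Block until the bdevperf app answers on the UNIX-domain RPC socket.
waitforlisten $raid_pid /var/tmp/spdk-raid.sock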
00:21:48.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.454 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:48.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.454 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:48.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.454 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:48.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.454 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:48.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.454 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:48.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.454 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:48.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.454 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:48.454 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.454 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:48.455 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:48.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:48.455 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:48.455 [2024-07-15 10:29:13.057154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.455 [2024-07-15 10:29:13.127566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.455 [2024-07-15 10:29:13.179996] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.455 [2024-07-15 10:29:13.180027] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.020 10:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.020 10:29:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:21:49.020 10:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:49.020 10:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:49.277 BaseBdev1_malloc 00:21:49.277 10:29:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:49.277 [2024-07-15 10:29:14.056267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:49.277 [2024-07-15 10:29:14.056304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.277 [2024-07-15 10:29:14.056320] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc465f0 00:21:49.277 [2024-07-15 10:29:14.056329] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.277 [2024-07-15 10:29:14.057439] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.277 [2024-07-15 10:29:14.057462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:49.277 BaseBdev1 00:21:49.534 10:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:21:49.534 10:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:49.534 BaseBdev2_malloc 00:21:49.534 10:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:21:49.792 [2024-07-15 10:29:14.404894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:49.792 [2024-07-15 10:29:14.404946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.792 [2024-07-15 10:29:14.404960] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdea130 00:21:49.792 [2024-07-15 10:29:14.404969] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.792 [2024-07-15 10:29:14.405978] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.792 [2024-07-15 10:29:14.406000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:49.792 BaseBdev2 00:21:49.792 10:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:21:49.792 spare_malloc 00:21:50.051 10:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:50.051 spare_delay 00:21:50.051 10:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:50.309 [2024-07-15 10:29:14.917604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:50.309 [2024-07-15 10:29:14.917632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.309 [2024-07-15 10:29:14.917644] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xde9770 00:21:50.309 [2024-07-15 10:29:14.917667] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.309 [2024-07-15 10:29:14.918580] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.309 [2024-07-15 10:29:14.918601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:50.309 spare 00:21:50.309 10:29:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:50.309 [2024-07-15 10:29:15.086058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:50.309 [2024-07-15 10:29:15.086824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:50.309 [2024-07-15 10:29:15.086933] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc3e270 00:21:50.309 [2024-07-15 10:29:15.086943] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:50.309 [2024-07-15 10:29:15.087053] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdea3c0 00:21:50.309 [2024-07-15 10:29:15.087140] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc3e270 00:21:50.309 [2024-07-15 10:29:15.087146] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc3e270 00:21:50.309 [2024-07-15 10:29:15.087205] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.568 10:29:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.568 "name": "raid_bdev1", 00:21:50.568 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:50.568 "strip_size_kb": 0, 00:21:50.568 "state": "online", 00:21:50.568 "raid_level": "raid1", 00:21:50.568 "superblock": true, 00:21:50.568 "num_base_bdevs": 2, 00:21:50.568 "num_base_bdevs_discovered": 2, 00:21:50.568 "num_base_bdevs_operational": 2, 00:21:50.568 "base_bdevs_list": [ 00:21:50.568 { 00:21:50.568 "name": "BaseBdev1", 00:21:50.568 "uuid": "3098426a-2670-57da-ac37-468c418a0a1b", 00:21:50.568 "is_configured": true, 00:21:50.568 "data_offset": 256, 00:21:50.568 "data_size": 7936 00:21:50.568 }, 00:21:50.568 { 00:21:50.568 "name": "BaseBdev2", 00:21:50.568 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:50.568 "is_configured": true, 00:21:50.568 "data_offset": 256, 00:21:50.568 "data_size": 7936 00:21:50.568 } 00:21:50.568 ] 00:21:50.568 }' 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.568 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.136 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:21:51.136 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:51.136 [2024-07-15 10:29:15.872226] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:51.136 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:21:51.136 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:51.136 10:29:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@618 -- # data_offset=256 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:51.395 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:51.654 [2024-07-15 10:29:16.204962] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdea3c0 00:21:51.654 /dev/nbd0 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:51.654 1+0 records in 00:21:51.654 1+0 records out 00:21:51.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258923 s, 15.8 MB/s 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:21:51.654 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:52.230 7936+0 records in 00:21:52.230 7936+0 records out 00:21:52.230 32505856 bytes (33 MB, 31 MiB) copied, 0.483209 s, 67.3 MB/s 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:52.230 [2024-07-15 10:29:16.948255] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:52.230 10:29:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:52.523 [2024-07-15 10:29:17.112707] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- 
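Two details in the chunk above are worth pinning down: the 32505856 bytes reported by dd is exactly the array's data area, 7936 blocks of 4096 bytes as reported by bdev_get_bdevs earlier, and immediately after the write the test degrades the freshly written array by pulling BaseBdev1. A minimal sketch of that step, using only commands that appear in the trace (reading num_base_bdevs_discovered directly with jq is my shortcut; the script stores the whole JSON object and checks its fields):

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  echo $((7936 * 4096))   # 32505856 bytes, matching the dd transfer total above

  # degrade the array and confirm only one base bdev remains discovered while state stays "online"
  $RPC bdev_raid_remove_base_bdev BaseBdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'   # expect 1
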
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.523 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.782 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:52.782 "name": "raid_bdev1", 00:21:52.782 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:52.782 "strip_size_kb": 0, 00:21:52.782 "state": "online", 00:21:52.782 "raid_level": "raid1", 00:21:52.782 "superblock": true, 00:21:52.782 "num_base_bdevs": 2, 00:21:52.782 "num_base_bdevs_discovered": 1, 00:21:52.782 "num_base_bdevs_operational": 1, 00:21:52.782 "base_bdevs_list": [ 00:21:52.782 { 00:21:52.782 "name": null, 00:21:52.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.782 "is_configured": false, 00:21:52.782 "data_offset": 256, 00:21:52.782 "data_size": 7936 00:21:52.782 }, 00:21:52.782 { 00:21:52.782 "name": "BaseBdev2", 00:21:52.782 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:52.782 "is_configured": true, 00:21:52.782 "data_offset": 256, 00:21:52.782 "data_size": 7936 00:21:52.782 } 00:21:52.782 ] 00:21:52.782 }' 00:21:52.782 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:52.782 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.041 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:53.299 [2024-07-15 10:29:17.950870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:53.299 [2024-07-15 10:29:17.955218] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdde8f0 00:21:53.299 [2024-07-15 10:29:17.956752] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.299 10:29:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:21:54.231 10:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.231 10:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:54.231 10:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:54.231 10:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:54.231 10:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:54.231 10:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.231 10:29:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.489 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:54.489 "name": "raid_bdev1", 00:21:54.489 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:54.489 "strip_size_kb": 0, 00:21:54.489 "state": "online", 00:21:54.489 "raid_level": "raid1", 00:21:54.489 "superblock": true, 00:21:54.489 "num_base_bdevs": 2, 00:21:54.489 "num_base_bdevs_discovered": 2, 00:21:54.489 "num_base_bdevs_operational": 2, 00:21:54.489 "process": { 00:21:54.489 "type": "rebuild", 00:21:54.489 "target": "spare", 00:21:54.489 "progress": { 00:21:54.489 "blocks": 2816, 00:21:54.489 "percent": 35 00:21:54.489 } 00:21:54.489 }, 00:21:54.489 "base_bdevs_list": [ 00:21:54.489 { 00:21:54.489 "name": "spare", 00:21:54.489 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:21:54.489 "is_configured": true, 00:21:54.489 "data_offset": 256, 00:21:54.489 "data_size": 7936 00:21:54.489 }, 00:21:54.489 { 00:21:54.489 "name": "BaseBdev2", 00:21:54.489 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:54.489 "is_configured": true, 00:21:54.489 "data_offset": 256, 00:21:54.489 "data_size": 7936 00:21:54.489 } 00:21:54.489 ] 00:21:54.489 }' 00:21:54.489 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:54.489 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.489 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:54.489 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.489 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:54.748 [2024-07-15 10:29:19.379265] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:54.748 [2024-07-15 10:29:19.467176] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:54.748 [2024-07-15 10:29:19.467208] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.748 [2024-07-15 10:29:19.467217] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:54.748 [2024-07-15 10:29:19.467222] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.748 
10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.748 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.007 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.007 "name": "raid_bdev1", 00:21:55.007 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:55.007 "strip_size_kb": 0, 00:21:55.007 "state": "online", 00:21:55.007 "raid_level": "raid1", 00:21:55.007 "superblock": true, 00:21:55.007 "num_base_bdevs": 2, 00:21:55.007 "num_base_bdevs_discovered": 1, 00:21:55.007 "num_base_bdevs_operational": 1, 00:21:55.007 "base_bdevs_list": [ 00:21:55.007 { 00:21:55.007 "name": null, 00:21:55.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.007 "is_configured": false, 00:21:55.007 "data_offset": 256, 00:21:55.007 "data_size": 7936 00:21:55.007 }, 00:21:55.007 { 00:21:55.007 "name": "BaseBdev2", 00:21:55.007 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:55.007 "is_configured": true, 00:21:55.007 "data_offset": 256, 00:21:55.007 "data_size": 7936 00:21:55.007 } 00:21:55.007 ] 00:21:55.007 }' 00:21:55.007 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.007 10:29:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:55.576 "name": "raid_bdev1", 00:21:55.576 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:55.576 "strip_size_kb": 0, 00:21:55.576 "state": "online", 00:21:55.576 "raid_level": "raid1", 00:21:55.576 "superblock": true, 00:21:55.576 "num_base_bdevs": 2, 00:21:55.576 "num_base_bdevs_discovered": 1, 00:21:55.576 "num_base_bdevs_operational": 1, 00:21:55.576 "base_bdevs_list": [ 00:21:55.576 { 00:21:55.576 "name": null, 00:21:55.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.576 "is_configured": false, 00:21:55.576 "data_offset": 256, 00:21:55.576 "data_size": 7936 00:21:55.576 }, 00:21:55.576 { 00:21:55.576 "name": "BaseBdev2", 00:21:55.576 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:55.576 
"is_configured": true, 00:21:55.576 "data_offset": 256, 00:21:55.576 "data_size": 7936 00:21:55.576 } 00:21:55.576 ] 00:21:55.576 }' 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:21:55.576 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:55.835 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:55.835 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:55.835 [2024-07-15 10:29:20.561992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:55.835 [2024-07-15 10:29:20.566378] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdde8f0 00:21:55.835 [2024-07-15 10:29:20.567481] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:55.835 10:29:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:57.210 "name": "raid_bdev1", 00:21:57.210 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:57.210 "strip_size_kb": 0, 00:21:57.210 "state": "online", 00:21:57.210 "raid_level": "raid1", 00:21:57.210 "superblock": true, 00:21:57.210 "num_base_bdevs": 2, 00:21:57.210 "num_base_bdevs_discovered": 2, 00:21:57.210 "num_base_bdevs_operational": 2, 00:21:57.210 "process": { 00:21:57.210 "type": "rebuild", 00:21:57.210 "target": "spare", 00:21:57.210 "progress": { 00:21:57.210 "blocks": 2816, 00:21:57.210 "percent": 35 00:21:57.210 } 00:21:57.210 }, 00:21:57.210 "base_bdevs_list": [ 00:21:57.210 { 00:21:57.210 "name": "spare", 00:21:57.210 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:21:57.210 "is_configured": true, 00:21:57.210 "data_offset": 256, 00:21:57.210 "data_size": 7936 00:21:57.210 }, 00:21:57.210 { 00:21:57.210 "name": "BaseBdev2", 00:21:57.210 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:57.210 "is_configured": true, 00:21:57.210 "data_offset": 256, 00:21:57.210 "data_size": 7936 00:21:57.210 } 00:21:57.210 ] 00:21:57.210 }' 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:21:57.210 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=782 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.210 10:29:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.469 10:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:57.469 "name": "raid_bdev1", 00:21:57.469 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:57.469 "strip_size_kb": 0, 00:21:57.469 "state": "online", 00:21:57.469 "raid_level": "raid1", 00:21:57.469 "superblock": true, 00:21:57.469 "num_base_bdevs": 2, 00:21:57.469 "num_base_bdevs_discovered": 2, 00:21:57.469 "num_base_bdevs_operational": 2, 00:21:57.469 "process": { 00:21:57.469 "type": "rebuild", 00:21:57.469 "target": "spare", 00:21:57.469 "progress": { 00:21:57.469 "blocks": 3584, 00:21:57.469 "percent": 45 00:21:57.469 } 00:21:57.469 }, 00:21:57.469 "base_bdevs_list": [ 00:21:57.469 { 00:21:57.469 "name": "spare", 00:21:57.469 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:21:57.469 "is_configured": true, 00:21:57.469 "data_offset": 256, 00:21:57.469 "data_size": 7936 00:21:57.469 }, 00:21:57.469 { 00:21:57.469 "name": "BaseBdev2", 00:21:57.469 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:57.469 "is_configured": true, 00:21:57.469 "data_offset": 256, 00:21:57.469 "data_size": 7936 00:21:57.469 } 00:21:57.469 ] 00:21:57.469 }' 00:21:57.469 10:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:57.469 10:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.469 10:29:22 bdev_raid.raid_rebuild_test_sb_4k -- 
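The "line 665: [: =: unary operator expected" message captured above is the usual single-bracket failure mode: an unquoted variable that expands to nothing leaves the test as '[ = false ]', which the shell cannot parse, and execution simply continues to the next check in the trace. A minimal reproduction with the standard guards (the variable name here is illustrative, not the one bdev_raid.sh uses):

  flag=""                      # empty, e.g. an unset or blank parameter
  [ $flag = false ]            # expands to '[ = false ]' -> "unary operator expected"
  [ "$flag" = false ]          # quoting keeps the left operand, so this just evaluates to false
  [[ $flag = false ]]          # bash's [[ ]] also handles the empty expansion without error
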
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:57.469 10:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.469 10:29:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:58.405 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:58.405 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.405 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:58.405 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:58.405 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:58.405 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:58.405 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.405 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.663 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:58.663 "name": "raid_bdev1", 00:21:58.663 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:58.663 "strip_size_kb": 0, 00:21:58.663 "state": "online", 00:21:58.663 "raid_level": "raid1", 00:21:58.663 "superblock": true, 00:21:58.663 "num_base_bdevs": 2, 00:21:58.663 "num_base_bdevs_discovered": 2, 00:21:58.663 "num_base_bdevs_operational": 2, 00:21:58.663 "process": { 00:21:58.663 "type": "rebuild", 00:21:58.663 "target": "spare", 00:21:58.663 "progress": { 00:21:58.663 "blocks": 6656, 00:21:58.663 "percent": 83 00:21:58.663 } 00:21:58.663 }, 00:21:58.663 "base_bdevs_list": [ 00:21:58.663 { 00:21:58.663 "name": "spare", 00:21:58.663 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:21:58.663 "is_configured": true, 00:21:58.663 "data_offset": 256, 00:21:58.663 "data_size": 7936 00:21:58.663 }, 00:21:58.663 { 00:21:58.663 "name": "BaseBdev2", 00:21:58.663 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:58.663 "is_configured": true, 00:21:58.663 "data_offset": 256, 00:21:58.663 "data_size": 7936 00:21:58.663 } 00:21:58.663 ] 00:21:58.663 }' 00:21:58.663 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:58.663 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.663 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:58.663 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.663 10:29:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:21:58.921 [2024-07-15 10:29:23.688825] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:58.921 [2024-07-15 10:29:23.688868] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:58.921 [2024-07-15 10:29:23.688936] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:21:59.853 10:29:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:59.853 "name": "raid_bdev1", 00:21:59.853 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:21:59.853 "strip_size_kb": 0, 00:21:59.853 "state": "online", 00:21:59.853 "raid_level": "raid1", 00:21:59.853 "superblock": true, 00:21:59.853 "num_base_bdevs": 2, 00:21:59.853 "num_base_bdevs_discovered": 2, 00:21:59.853 "num_base_bdevs_operational": 2, 00:21:59.853 "base_bdevs_list": [ 00:21:59.853 { 00:21:59.853 "name": "spare", 00:21:59.853 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:21:59.853 "is_configured": true, 00:21:59.853 "data_offset": 256, 00:21:59.853 "data_size": 7936 00:21:59.853 }, 00:21:59.853 { 00:21:59.853 "name": "BaseBdev2", 00:21:59.853 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:21:59.853 "is_configured": true, 00:21:59.853 "data_offset": 256, 00:21:59.853 "data_size": 7936 00:21:59.853 } 00:21:59.853 ] 00:21:59.853 }' 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:21:59.853 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:21:59.854 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:21:59.854 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.854 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:00.111 "name": "raid_bdev1", 00:22:00.111 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:00.111 "strip_size_kb": 0, 
00:22:00.111 "state": "online", 00:22:00.111 "raid_level": "raid1", 00:22:00.111 "superblock": true, 00:22:00.111 "num_base_bdevs": 2, 00:22:00.111 "num_base_bdevs_discovered": 2, 00:22:00.111 "num_base_bdevs_operational": 2, 00:22:00.111 "base_bdevs_list": [ 00:22:00.111 { 00:22:00.111 "name": "spare", 00:22:00.111 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:22:00.111 "is_configured": true, 00:22:00.111 "data_offset": 256, 00:22:00.111 "data_size": 7936 00:22:00.111 }, 00:22:00.111 { 00:22:00.111 "name": "BaseBdev2", 00:22:00.111 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:00.111 "is_configured": true, 00:22:00.111 "data_offset": 256, 00:22:00.111 "data_size": 7936 00:22:00.111 } 00:22:00.111 ] 00:22:00.111 }' 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.111 10:29:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.369 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.369 "name": "raid_bdev1", 00:22:00.369 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:00.369 "strip_size_kb": 0, 00:22:00.369 "state": "online", 00:22:00.369 "raid_level": "raid1", 00:22:00.369 "superblock": true, 00:22:00.369 "num_base_bdevs": 2, 00:22:00.369 "num_base_bdevs_discovered": 2, 00:22:00.369 "num_base_bdevs_operational": 2, 00:22:00.369 "base_bdevs_list": [ 00:22:00.369 { 00:22:00.369 "name": "spare", 00:22:00.369 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:22:00.369 "is_configured": true, 00:22:00.369 "data_offset": 256, 00:22:00.369 "data_size": 7936 00:22:00.369 }, 00:22:00.369 { 00:22:00.369 "name": "BaseBdev2", 00:22:00.369 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:00.369 "is_configured": true, 00:22:00.369 "data_offset": 256, 
00:22:00.369 "data_size": 7936 00:22:00.369 } 00:22:00.369 ] 00:22:00.369 }' 00:22:00.369 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.369 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.935 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:00.935 [2024-07-15 10:29:25.657789] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:00.935 [2024-07-15 10:29:25.657810] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:00.935 [2024-07-15 10:29:25.657855] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:00.935 [2024-07-15 10:29:25.657894] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.935 [2024-07-15 10:29:25.657907] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc3e270 name raid_bdev1, state offline 00:22:00.935 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.935 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:01.193 10:29:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:01.451 /dev/nbd0 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 
00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.451 1+0 records in 00:22:01.451 1+0 records out 00:22:01.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243525 s, 16.8 MB/s 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:01.451 /dev/nbd1 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:01.451 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:01.710 1+0 records in 00:22:01.710 1+0 records out 00:22:01.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285145 s, 14.4 MB/s 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.710 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:01.969 
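The cmp above is the data-integrity check for the rebuild: after raid_bdev1 is deleted, BaseBdev1 and the rebuilt spare are exported over NBD and compared, skipping the first 1048576 bytes of both devices, which is the 256-block data_offset reported earlier times the 4096-byte block size, i.e. the superblock region at the front of each member. A condensed sketch of that sequence, using only commands from the trace (the RPC shorthand is introduced here for readability):

  RPC="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  echo $((256 * 4096))                  # 1048576 -> the -i (skip) offset handed to cmp

  $RPC nbd_start_disk BaseBdev1 /dev/nbd0
  $RPC nbd_start_disk spare /dev/nbd1
  cmp -i 1048576 /dev/nbd0 /dev/nbd1    # silent success => the data areas match, so the rebuild onto spare copied what was written
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
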
10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:01.969 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:02.228 10:29:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:02.486 [2024-07-15 10:29:27.038374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:02.486 [2024-07-15 10:29:27.038408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.486 [2024-07-15 10:29:27.038422] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc3d940 00:22:02.486 [2024-07-15 10:29:27.038430] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.487 [2024-07-15 10:29:27.039551] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.487 [2024-07-15 10:29:27.039576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:02.487 [2024-07-15 10:29:27.039629] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:02.487 [2024-07-15 10:29:27.039652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:02.487 [2024-07-15 10:29:27.039719] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.487 spare 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.487 [2024-07-15 10:29:27.140009] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc40770 00:22:02.487 [2024-07-15 10:29:27.140024] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:02.487 [2024-07-15 
10:29:27.140158] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdea3c0 00:22:02.487 [2024-07-15 10:29:27.140260] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc40770 00:22:02.487 [2024-07-15 10:29:27.140267] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc40770 00:22:02.487 [2024-07-15 10:29:27.140336] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.487 "name": "raid_bdev1", 00:22:02.487 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:02.487 "strip_size_kb": 0, 00:22:02.487 "state": "online", 00:22:02.487 "raid_level": "raid1", 00:22:02.487 "superblock": true, 00:22:02.487 "num_base_bdevs": 2, 00:22:02.487 "num_base_bdevs_discovered": 2, 00:22:02.487 "num_base_bdevs_operational": 2, 00:22:02.487 "base_bdevs_list": [ 00:22:02.487 { 00:22:02.487 "name": "spare", 00:22:02.487 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:22:02.487 "is_configured": true, 00:22:02.487 "data_offset": 256, 00:22:02.487 "data_size": 7936 00:22:02.487 }, 00:22:02.487 { 00:22:02.487 "name": "BaseBdev2", 00:22:02.487 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:02.487 "is_configured": true, 00:22:02.487 "data_offset": 256, 00:22:02.487 "data_size": 7936 00:22:02.487 } 00:22:02.487 ] 00:22:02.487 }' 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.487 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.053 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.053 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:03.053 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:03.053 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:03.053 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:03.053 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.053 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.312 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.312 "name": "raid_bdev1", 00:22:03.312 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:03.312 "strip_size_kb": 0, 00:22:03.312 "state": "online", 00:22:03.312 "raid_level": "raid1", 00:22:03.312 "superblock": true, 00:22:03.312 "num_base_bdevs": 2, 00:22:03.312 "num_base_bdevs_discovered": 2, 00:22:03.312 "num_base_bdevs_operational": 2, 00:22:03.312 "base_bdevs_list": [ 00:22:03.312 { 00:22:03.312 "name": "spare", 00:22:03.312 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:22:03.312 "is_configured": true, 00:22:03.312 "data_offset": 256, 00:22:03.312 "data_size": 7936 00:22:03.312 }, 00:22:03.312 { 00:22:03.312 "name": "BaseBdev2", 00:22:03.312 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:03.312 "is_configured": true, 00:22:03.312 "data_offset": 256, 00:22:03.312 "data_size": 7936 00:22:03.312 } 00:22:03.312 ] 00:22:03.312 }' 00:22:03.312 10:29:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:03.312 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:03.312 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:03.312 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:03.312 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.312 10:29:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:03.571 [2024-07-15 10:29:28.305693] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.571 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.830 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.830 "name": "raid_bdev1", 00:22:03.830 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:03.830 "strip_size_kb": 0, 00:22:03.830 "state": "online", 00:22:03.830 "raid_level": "raid1", 00:22:03.830 "superblock": true, 00:22:03.830 "num_base_bdevs": 2, 00:22:03.830 "num_base_bdevs_discovered": 1, 00:22:03.830 "num_base_bdevs_operational": 1, 00:22:03.830 "base_bdevs_list": [ 00:22:03.830 { 00:22:03.830 "name": null, 00:22:03.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.830 "is_configured": false, 00:22:03.830 "data_offset": 256, 00:22:03.830 "data_size": 7936 00:22:03.830 }, 00:22:03.830 { 00:22:03.830 "name": "BaseBdev2", 00:22:03.830 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:03.830 "is_configured": true, 00:22:03.830 "data_offset": 256, 00:22:03.830 
"data_size": 7936 00:22:03.830 } 00:22:03.830 ] 00:22:03.830 }' 00:22:03.830 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.830 10:29:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.397 10:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:04.397 [2024-07-15 10:29:29.159913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.397 [2024-07-15 10:29:29.160024] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:04.397 [2024-07-15 10:29:29.160035] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:04.397 [2024-07-15 10:29:29.160056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.397 [2024-07-15 10:29:29.164340] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xdde8f0 00:22:04.397 [2024-07-15 10:29:29.165951] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:04.397 10:29:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:05.773 "name": "raid_bdev1", 00:22:05.773 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:05.773 "strip_size_kb": 0, 00:22:05.773 "state": "online", 00:22:05.773 "raid_level": "raid1", 00:22:05.773 "superblock": true, 00:22:05.773 "num_base_bdevs": 2, 00:22:05.773 "num_base_bdevs_discovered": 2, 00:22:05.773 "num_base_bdevs_operational": 2, 00:22:05.773 "process": { 00:22:05.773 "type": "rebuild", 00:22:05.773 "target": "spare", 00:22:05.773 "progress": { 00:22:05.773 "blocks": 2816, 00:22:05.773 "percent": 35 00:22:05.773 } 00:22:05.773 }, 00:22:05.773 "base_bdevs_list": [ 00:22:05.773 { 00:22:05.773 "name": "spare", 00:22:05.773 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:22:05.773 "is_configured": true, 00:22:05.773 "data_offset": 256, 00:22:05.773 "data_size": 7936 00:22:05.773 }, 00:22:05.773 { 00:22:05.773 "name": "BaseBdev2", 00:22:05.773 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:05.773 "is_configured": true, 00:22:05.773 "data_offset": 256, 00:22:05.773 "data_size": 7936 00:22:05.773 } 00:22:05.773 ] 00:22:05.773 }' 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.773 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:06.031 [2024-07-15 10:29:30.592556] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:06.031 [2024-07-15 10:29:30.676357] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:06.031 [2024-07-15 10:29:30.676388] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.031 [2024-07-15 10:29:30.676398] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:06.031 [2024-07-15 10:29:30.676403] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.031 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.032 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.294 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.294 "name": "raid_bdev1", 00:22:06.294 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:06.294 "strip_size_kb": 0, 00:22:06.295 "state": "online", 00:22:06.295 "raid_level": "raid1", 00:22:06.295 "superblock": true, 00:22:06.295 "num_base_bdevs": 2, 00:22:06.295 "num_base_bdevs_discovered": 1, 00:22:06.295 "num_base_bdevs_operational": 1, 00:22:06.295 "base_bdevs_list": [ 00:22:06.295 { 00:22:06.295 "name": null, 00:22:06.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.295 "is_configured": false, 00:22:06.295 "data_offset": 256, 00:22:06.295 "data_size": 7936 00:22:06.295 }, 00:22:06.295 { 00:22:06.295 "name": "BaseBdev2", 00:22:06.295 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:06.295 
"is_configured": true, 00:22:06.295 "data_offset": 256, 00:22:06.295 "data_size": 7936 00:22:06.295 } 00:22:06.295 ] 00:22:06.295 }' 00:22:06.295 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.295 10:29:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.618 10:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:06.877 [2024-07-15 10:29:31.506469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:06.877 [2024-07-15 10:29:31.506513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.877 [2024-07-15 10:29:31.506529] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcdabf0 00:22:06.877 [2024-07-15 10:29:31.506537] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.877 [2024-07-15 10:29:31.506831] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.877 [2024-07-15 10:29:31.506845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:06.877 [2024-07-15 10:29:31.506891] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:06.877 [2024-07-15 10:29:31.506900] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:06.877 [2024-07-15 10:29:31.506914] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:06.877 [2024-07-15 10:29:31.506928] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.877 [2024-07-15 10:29:31.511275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xde11d0 00:22:06.877 spare 00:22:06.877 [2024-07-15 10:29:31.512317] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:06.877 10:29:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:07.812 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.812 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:07.812 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:07.812 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:07.812 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:07.812 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.812 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.070 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:08.070 "name": "raid_bdev1", 00:22:08.070 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:08.070 "strip_size_kb": 0, 00:22:08.070 "state": "online", 00:22:08.070 "raid_level": "raid1", 00:22:08.070 "superblock": true, 00:22:08.070 "num_base_bdevs": 2, 00:22:08.070 "num_base_bdevs_discovered": 2, 00:22:08.070 
"num_base_bdevs_operational": 2, 00:22:08.070 "process": { 00:22:08.070 "type": "rebuild", 00:22:08.070 "target": "spare", 00:22:08.070 "progress": { 00:22:08.070 "blocks": 2816, 00:22:08.070 "percent": 35 00:22:08.071 } 00:22:08.071 }, 00:22:08.071 "base_bdevs_list": [ 00:22:08.071 { 00:22:08.071 "name": "spare", 00:22:08.071 "uuid": "2e2abd58-7e94-58d8-9040-46e3246e3f15", 00:22:08.071 "is_configured": true, 00:22:08.071 "data_offset": 256, 00:22:08.071 "data_size": 7936 00:22:08.071 }, 00:22:08.071 { 00:22:08.071 "name": "BaseBdev2", 00:22:08.071 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:08.071 "is_configured": true, 00:22:08.071 "data_offset": 256, 00:22:08.071 "data_size": 7936 00:22:08.071 } 00:22:08.071 ] 00:22:08.071 }' 00:22:08.071 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:08.071 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.071 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:08.071 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.071 10:29:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:08.329 [2024-07-15 10:29:32.930760] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.329 [2024-07-15 10:29:33.022642] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:08.329 [2024-07-15 10:29:33.022678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.329 [2024-07-15 10:29:33.022688] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.329 [2024-07-15 10:29:33.022693] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.329 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.586 10:29:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.586 "name": "raid_bdev1", 00:22:08.586 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:08.586 "strip_size_kb": 0, 00:22:08.586 "state": "online", 00:22:08.586 "raid_level": "raid1", 00:22:08.586 "superblock": true, 00:22:08.586 "num_base_bdevs": 2, 00:22:08.586 "num_base_bdevs_discovered": 1, 00:22:08.586 "num_base_bdevs_operational": 1, 00:22:08.586 "base_bdevs_list": [ 00:22:08.586 { 00:22:08.586 "name": null, 00:22:08.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.586 "is_configured": false, 00:22:08.586 "data_offset": 256, 00:22:08.586 "data_size": 7936 00:22:08.586 }, 00:22:08.586 { 00:22:08.586 "name": "BaseBdev2", 00:22:08.586 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:08.586 "is_configured": true, 00:22:08.586 "data_offset": 256, 00:22:08.586 "data_size": 7936 00:22:08.586 } 00:22:08.586 ] 00:22:08.586 }' 00:22:08.586 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.586 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.153 "name": "raid_bdev1", 00:22:09.153 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:09.153 "strip_size_kb": 0, 00:22:09.153 "state": "online", 00:22:09.153 "raid_level": "raid1", 00:22:09.153 "superblock": true, 00:22:09.153 "num_base_bdevs": 2, 00:22:09.153 "num_base_bdevs_discovered": 1, 00:22:09.153 "num_base_bdevs_operational": 1, 00:22:09.153 "base_bdevs_list": [ 00:22:09.153 { 00:22:09.153 "name": null, 00:22:09.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.153 "is_configured": false, 00:22:09.153 "data_offset": 256, 00:22:09.153 "data_size": 7936 00:22:09.153 }, 00:22:09.153 { 00:22:09.153 "name": "BaseBdev2", 00:22:09.153 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:09.153 "is_configured": true, 00:22:09.153 "data_offset": 256, 00:22:09.153 "data_size": 7936 00:22:09.153 } 00:22:09.153 ] 00:22:09.153 }' 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.153 10:29:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:09.412 10:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:09.671 [2024-07-15 10:29:34.261845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:09.671 [2024-07-15 10:29:34.261887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.671 [2024-07-15 10:29:34.261908] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc3fcf0 00:22:09.671 [2024-07-15 10:29:34.261917] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.671 [2024-07-15 10:29:34.262170] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.671 [2024-07-15 10:29:34.262183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:09.671 [2024-07-15 10:29:34.262229] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:09.671 [2024-07-15 10:29:34.262237] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:09.671 [2024-07-15 10:29:34.262244] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:09.671 BaseBdev1 00:22:09.671 10:29:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.606 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.864 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.864 "name": "raid_bdev1", 00:22:10.864 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:10.864 "strip_size_kb": 0, 00:22:10.864 "state": "online", 00:22:10.864 "raid_level": "raid1", 00:22:10.864 "superblock": true, 00:22:10.864 "num_base_bdevs": 2, 00:22:10.864 "num_base_bdevs_discovered": 1, 00:22:10.864 
"num_base_bdevs_operational": 1, 00:22:10.864 "base_bdevs_list": [ 00:22:10.864 { 00:22:10.864 "name": null, 00:22:10.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.864 "is_configured": false, 00:22:10.864 "data_offset": 256, 00:22:10.864 "data_size": 7936 00:22:10.864 }, 00:22:10.864 { 00:22:10.864 "name": "BaseBdev2", 00:22:10.864 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:10.864 "is_configured": true, 00:22:10.864 "data_offset": 256, 00:22:10.864 "data_size": 7936 00:22:10.864 } 00:22:10.864 ] 00:22:10.864 }' 00:22:10.864 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.864 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:11.430 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:11.430 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:11.430 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:11.430 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:11.430 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:11.430 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.430 10:29:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:11.430 "name": "raid_bdev1", 00:22:11.430 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:11.430 "strip_size_kb": 0, 00:22:11.430 "state": "online", 00:22:11.430 "raid_level": "raid1", 00:22:11.430 "superblock": true, 00:22:11.430 "num_base_bdevs": 2, 00:22:11.430 "num_base_bdevs_discovered": 1, 00:22:11.430 "num_base_bdevs_operational": 1, 00:22:11.430 "base_bdevs_list": [ 00:22:11.430 { 00:22:11.430 "name": null, 00:22:11.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.430 "is_configured": false, 00:22:11.430 "data_offset": 256, 00:22:11.430 "data_size": 7936 00:22:11.430 }, 00:22:11.430 { 00:22:11.430 "name": "BaseBdev2", 00:22:11.430 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:11.430 "is_configured": true, 00:22:11.430 "data_offset": 256, 00:22:11.430 "data_size": 7936 00:22:11.430 } 00:22:11.430 ] 00:22:11.430 }' 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:11.430 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:11.688 [2024-07-15 10:29:36.363236] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:11.688 [2024-07-15 10:29:36.363332] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:11.688 [2024-07-15 10:29:36.363342] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:11.688 request: 00:22:11.688 { 00:22:11.688 "base_bdev": "BaseBdev1", 00:22:11.688 "raid_bdev": "raid_bdev1", 00:22:11.688 "method": "bdev_raid_add_base_bdev", 00:22:11.688 "req_id": 1 00:22:11.688 } 00:22:11.688 Got JSON-RPC error response 00:22:11.688 response: 00:22:11.688 { 00:22:11.688 "code": -22, 00:22:11.688 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:11.688 } 00:22:11.688 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:22:11.688 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.688 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:11.688 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.688 10:29:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:12.623 10:29:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.623 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.881 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.881 "name": "raid_bdev1", 00:22:12.881 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:12.881 "strip_size_kb": 0, 00:22:12.881 "state": "online", 00:22:12.881 "raid_level": "raid1", 00:22:12.881 "superblock": true, 00:22:12.881 "num_base_bdevs": 2, 00:22:12.881 "num_base_bdevs_discovered": 1, 00:22:12.881 "num_base_bdevs_operational": 1, 00:22:12.881 "base_bdevs_list": [ 00:22:12.881 { 00:22:12.881 "name": null, 00:22:12.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.881 "is_configured": false, 00:22:12.881 "data_offset": 256, 00:22:12.881 "data_size": 7936 00:22:12.881 }, 00:22:12.881 { 00:22:12.881 "name": "BaseBdev2", 00:22:12.881 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:12.881 "is_configured": true, 00:22:12.881 "data_offset": 256, 00:22:12.881 "data_size": 7936 00:22:12.881 } 00:22:12.881 ] 00:22:12.881 }' 00:22:12.881 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.881 10:29:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:13.447 "name": "raid_bdev1", 00:22:13.447 "uuid": "f2343a34-c695-4689-afc3-a1b08d5da8fa", 00:22:13.447 "strip_size_kb": 0, 00:22:13.447 "state": "online", 00:22:13.447 "raid_level": "raid1", 00:22:13.447 "superblock": true, 00:22:13.447 "num_base_bdevs": 2, 00:22:13.447 "num_base_bdevs_discovered": 1, 00:22:13.447 "num_base_bdevs_operational": 1, 00:22:13.447 "base_bdevs_list": [ 00:22:13.447 { 00:22:13.447 "name": null, 00:22:13.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.447 "is_configured": 
false, 00:22:13.447 "data_offset": 256, 00:22:13.447 "data_size": 7936 00:22:13.447 }, 00:22:13.447 { 00:22:13.447 "name": "BaseBdev2", 00:22:13.447 "uuid": "ebb54a81-753c-5a49-b3a6-df7dd4bf85b1", 00:22:13.447 "is_configured": true, 00:22:13.447 "data_offset": 256, 00:22:13.447 "data_size": 7936 00:22:13.447 } 00:22:13.447 ] 00:22:13.447 }' 00:22:13.447 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 1886413 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 1886413 ']' 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 1886413 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1886413 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1886413' 00:22:13.706 killing process with pid 1886413 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 1886413 00:22:13.706 Received shutdown signal, test time was about 60.000000 seconds 00:22:13.706 00:22:13.706 Latency(us) 00:22:13.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.706 =================================================================================================================== 00:22:13.706 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:13.706 [2024-07-15 10:29:38.348345] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:13.706 [2024-07-15 10:29:38.348417] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:13.706 [2024-07-15 10:29:38.348448] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:13.706 [2024-07-15 10:29:38.348456] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc40770 name raid_bdev1, state offline 00:22:13.706 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 1886413 00:22:13.706 [2024-07-15 10:29:38.372073] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:13.965 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:22:13.965 00:22:13.965 real 0m25.639s 00:22:13.965 user 0m38.614s 00:22:13.965 sys 0m4.046s 00:22:13.965 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:13.965 10:29:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:13.965 
************************************ 00:22:13.965 END TEST raid_rebuild_test_sb_4k 00:22:13.965 ************************************ 00:22:13.965 10:29:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:13.965 10:29:38 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:22:13.965 10:29:38 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:13.965 10:29:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:13.965 10:29:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.965 10:29:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:13.965 ************************************ 00:22:13.965 START TEST raid_state_function_test_sb_md_separate 00:22:13.965 ************************************ 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=1891119 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1891119' 00:22:13.965 Process raid pid: 1891119 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 1891119 /var/tmp/spdk-raid.sock 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 1891119 ']' 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:13.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.965 10:29:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.965 [2024-07-15 10:29:38.680595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:13.965 [2024-07-15 10:29:38.680640] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.965 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:13.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.965 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:13.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.965 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:13.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.965 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:13.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.965 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:13.965 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.965 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:13.966 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:13.966 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:13.966 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:14.224 [2024-07-15 10:29:38.771156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.224 [2024-07-15 10:29:38.845850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.224 [2024-07-15 10:29:38.897923] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.224 [2024-07-15 10:29:38.897946] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.791 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.791 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:14.791 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:15.050 [2024-07-15 10:29:39.621271] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:15.050 [2024-07-15 10:29:39.621301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:15.051 [2024-07-15 10:29:39.621308] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:15.051 [2024-07-15 10:29:39.621316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:15.051 
10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.051 "name": "Existed_Raid", 00:22:15.051 "uuid": "4eae2478-944e-423b-96d8-23421d03832d", 00:22:15.051 "strip_size_kb": 0, 00:22:15.051 "state": "configuring", 00:22:15.051 "raid_level": "raid1", 00:22:15.051 "superblock": true, 00:22:15.051 "num_base_bdevs": 2, 00:22:15.051 "num_base_bdevs_discovered": 0, 00:22:15.051 "num_base_bdevs_operational": 2, 00:22:15.051 "base_bdevs_list": [ 00:22:15.051 { 00:22:15.051 "name": "BaseBdev1", 00:22:15.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.051 "is_configured": false, 00:22:15.051 "data_offset": 0, 00:22:15.051 "data_size": 0 00:22:15.051 }, 00:22:15.051 { 00:22:15.051 "name": "BaseBdev2", 00:22:15.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.051 "is_configured": false, 00:22:15.051 "data_offset": 0, 00:22:15.051 "data_size": 0 00:22:15.051 } 00:22:15.051 ] 00:22:15.051 }' 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.051 10:29:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.617 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:15.876 [2024-07-15 10:29:40.471359] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:15.876 [2024-07-15 10:29:40.471379] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d91f20 name Existed_Raid, state configuring 00:22:15.876 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:15.876 [2024-07-15 10:29:40.639804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:15.876 [2024-07-15 10:29:40.639826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:15.876 [2024-07-15 10:29:40.639832] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:15.876 [2024-07-15 10:29:40.639839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:15.876 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 4096 -m 32 -b BaseBdev1 00:22:16.134 [2024-07-15 10:29:40.805326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:16.134 BaseBdev1 00:22:16.134 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:16.134 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:16.134 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:16.134 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:22:16.134 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:16.134 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:16.134 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:16.393 10:29:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:16.393 [ 00:22:16.393 { 00:22:16.393 "name": "BaseBdev1", 00:22:16.393 "aliases": [ 00:22:16.393 "2dbb9bab-1f2d-439d-a2fa-44c6ff2a272c" 00:22:16.393 ], 00:22:16.393 "product_name": "Malloc disk", 00:22:16.393 "block_size": 4096, 00:22:16.393 "num_blocks": 8192, 00:22:16.393 "uuid": "2dbb9bab-1f2d-439d-a2fa-44c6ff2a272c", 00:22:16.393 "md_size": 32, 00:22:16.393 "md_interleave": false, 00:22:16.393 "dif_type": 0, 00:22:16.393 "assigned_rate_limits": { 00:22:16.393 "rw_ios_per_sec": 0, 00:22:16.393 "rw_mbytes_per_sec": 0, 00:22:16.393 "r_mbytes_per_sec": 0, 00:22:16.393 "w_mbytes_per_sec": 0 00:22:16.393 }, 00:22:16.393 "claimed": true, 00:22:16.393 "claim_type": "exclusive_write", 00:22:16.393 "zoned": false, 00:22:16.393 "supported_io_types": { 00:22:16.393 "read": true, 00:22:16.393 "write": true, 00:22:16.393 "unmap": true, 00:22:16.393 "flush": true, 00:22:16.393 "reset": true, 00:22:16.393 "nvme_admin": false, 00:22:16.393 "nvme_io": false, 00:22:16.393 "nvme_io_md": false, 00:22:16.393 "write_zeroes": true, 00:22:16.393 "zcopy": true, 00:22:16.393 "get_zone_info": false, 00:22:16.393 "zone_management": false, 00:22:16.393 "zone_append": false, 00:22:16.393 "compare": false, 00:22:16.393 "compare_and_write": false, 00:22:16.393 "abort": true, 00:22:16.393 "seek_hole": false, 00:22:16.393 "seek_data": false, 00:22:16.393 "copy": true, 00:22:16.393 "nvme_iov_md": false 00:22:16.393 }, 00:22:16.393 "memory_domains": [ 00:22:16.393 { 00:22:16.393 "dma_device_id": "system", 00:22:16.393 "dma_device_type": 1 00:22:16.393 }, 00:22:16.393 { 00:22:16.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.393 "dma_device_type": 2 00:22:16.393 } 00:22:16.393 ], 00:22:16.393 "driver_specific": {} 00:22:16.393 } 00:22:16.393 ] 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.393 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.652 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.652 "name": "Existed_Raid", 00:22:16.652 "uuid": "a57dea5e-06cb-465a-a94f-81bac02b89a7", 00:22:16.652 "strip_size_kb": 0, 00:22:16.652 "state": "configuring", 00:22:16.652 "raid_level": "raid1", 00:22:16.652 "superblock": true, 00:22:16.652 "num_base_bdevs": 2, 00:22:16.652 "num_base_bdevs_discovered": 1, 00:22:16.652 "num_base_bdevs_operational": 2, 00:22:16.652 "base_bdevs_list": [ 00:22:16.652 { 00:22:16.652 "name": "BaseBdev1", 00:22:16.652 "uuid": "2dbb9bab-1f2d-439d-a2fa-44c6ff2a272c", 00:22:16.652 "is_configured": true, 00:22:16.652 "data_offset": 256, 00:22:16.652 "data_size": 7936 00:22:16.652 }, 00:22:16.652 { 00:22:16.652 "name": "BaseBdev2", 00:22:16.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.652 "is_configured": false, 00:22:16.652 "data_offset": 0, 00:22:16.652 "data_size": 0 00:22:16.652 } 00:22:16.652 ] 00:22:16.652 }' 00:22:16.652 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.652 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.218 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:17.218 [2024-07-15 10:29:41.968341] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:17.218 [2024-07-15 10:29:41.968368] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d91810 name Existed_Raid, state configuring 00:22:17.218 10:29:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:22:17.476 [2024-07-15 10:29:42.136794] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.476 [2024-07-15 10:29:42.137866] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:17.476 [2024-07-15 10:29:42.137891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.476 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.735 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.735 "name": "Existed_Raid", 00:22:17.735 "uuid": "9e91feb0-a81f-410f-8c40-c3a19a4de1ca", 00:22:17.735 "strip_size_kb": 0, 00:22:17.735 "state": "configuring", 00:22:17.735 "raid_level": "raid1", 00:22:17.735 "superblock": true, 00:22:17.735 "num_base_bdevs": 2, 00:22:17.735 "num_base_bdevs_discovered": 1, 00:22:17.735 "num_base_bdevs_operational": 2, 00:22:17.735 "base_bdevs_list": [ 00:22:17.735 { 00:22:17.735 "name": "BaseBdev1", 00:22:17.735 "uuid": "2dbb9bab-1f2d-439d-a2fa-44c6ff2a272c", 00:22:17.735 "is_configured": true, 00:22:17.735 "data_offset": 256, 00:22:17.735 "data_size": 7936 00:22:17.735 }, 00:22:17.735 { 00:22:17.735 "name": "BaseBdev2", 00:22:17.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.735 "is_configured": false, 00:22:17.735 "data_offset": 0, 00:22:17.735 "data_size": 0 00:22:17.735 } 00:22:17.735 ] 00:22:17.735 }' 00:22:17.735 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.735 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.306 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:18.306 [2024-07-15 10:29:42.950194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:18.306 [2024-07-15 10:29:42.950291] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d90f50 00:22:18.306 [2024-07-15 10:29:42.950299] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:18.306 [2024-07-15 10:29:42.950340] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d90990 00:22:18.306 [2024-07-15 10:29:42.950400] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d90f50 00:22:18.306 [2024-07-15 10:29:42.950406] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1d90f50 00:22:18.306 [2024-07-15 10:29:42.950446] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.306 BaseBdev2 00:22:18.306 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:18.306 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:18.306 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:18.306 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:22:18.306 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:18.306 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:18.306 10:29:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:18.564 [ 00:22:18.564 { 00:22:18.564 "name": "BaseBdev2", 00:22:18.564 "aliases": [ 00:22:18.564 "a52d9977-8b9b-42b9-a7c0-b40d9276aab8" 00:22:18.564 ], 00:22:18.564 "product_name": "Malloc disk", 00:22:18.564 "block_size": 4096, 00:22:18.564 "num_blocks": 8192, 00:22:18.564 "uuid": "a52d9977-8b9b-42b9-a7c0-b40d9276aab8", 00:22:18.564 "md_size": 32, 00:22:18.564 "md_interleave": false, 00:22:18.564 "dif_type": 0, 00:22:18.564 "assigned_rate_limits": { 00:22:18.564 "rw_ios_per_sec": 0, 00:22:18.564 "rw_mbytes_per_sec": 0, 00:22:18.564 "r_mbytes_per_sec": 0, 00:22:18.564 "w_mbytes_per_sec": 0 00:22:18.564 }, 00:22:18.564 "claimed": true, 00:22:18.564 "claim_type": "exclusive_write", 00:22:18.564 "zoned": false, 00:22:18.564 "supported_io_types": { 00:22:18.564 "read": true, 00:22:18.564 "write": true, 00:22:18.564 "unmap": true, 00:22:18.564 "flush": true, 00:22:18.564 "reset": true, 00:22:18.564 "nvme_admin": false, 00:22:18.564 "nvme_io": false, 00:22:18.564 "nvme_io_md": false, 00:22:18.564 "write_zeroes": true, 00:22:18.564 "zcopy": true, 00:22:18.564 "get_zone_info": false, 00:22:18.564 "zone_management": false, 00:22:18.564 "zone_append": false, 00:22:18.564 "compare": false, 00:22:18.564 "compare_and_write": false, 00:22:18.564 "abort": true, 00:22:18.564 "seek_hole": false, 00:22:18.564 "seek_data": false, 00:22:18.564 "copy": true, 00:22:18.564 
"nvme_iov_md": false 00:22:18.564 }, 00:22:18.564 "memory_domains": [ 00:22:18.564 { 00:22:18.564 "dma_device_id": "system", 00:22:18.564 "dma_device_type": 1 00:22:18.564 }, 00:22:18.564 { 00:22:18.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.564 "dma_device_type": 2 00:22:18.564 } 00:22:18.564 ], 00:22:18.564 "driver_specific": {} 00:22:18.564 } 00:22:18.564 ] 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:18.564 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:18.565 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.565 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.823 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.823 "name": "Existed_Raid", 00:22:18.823 "uuid": "9e91feb0-a81f-410f-8c40-c3a19a4de1ca", 00:22:18.823 "strip_size_kb": 0, 00:22:18.823 "state": "online", 00:22:18.823 "raid_level": "raid1", 00:22:18.823 "superblock": true, 00:22:18.823 "num_base_bdevs": 2, 00:22:18.823 "num_base_bdevs_discovered": 2, 00:22:18.823 "num_base_bdevs_operational": 2, 00:22:18.823 "base_bdevs_list": [ 00:22:18.823 { 00:22:18.823 "name": "BaseBdev1", 00:22:18.823 "uuid": "2dbb9bab-1f2d-439d-a2fa-44c6ff2a272c", 00:22:18.823 "is_configured": true, 00:22:18.823 "data_offset": 256, 00:22:18.823 "data_size": 7936 00:22:18.823 }, 00:22:18.823 { 00:22:18.823 "name": "BaseBdev2", 00:22:18.823 "uuid": "a52d9977-8b9b-42b9-a7c0-b40d9276aab8", 00:22:18.823 "is_configured": true, 00:22:18.823 "data_offset": 256, 00:22:18.823 "data_size": 7936 00:22:18.823 } 00:22:18.823 ] 00:22:18.823 }' 00:22:18.823 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.823 10:29:43 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.444 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:19.444 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:19.444 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:19.444 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:19.444 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:19.444 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:19.444 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:19.444 10:29:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:19.444 [2024-07-15 10:29:44.137443] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.444 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:19.444 "name": "Existed_Raid", 00:22:19.444 "aliases": [ 00:22:19.444 "9e91feb0-a81f-410f-8c40-c3a19a4de1ca" 00:22:19.444 ], 00:22:19.444 "product_name": "Raid Volume", 00:22:19.444 "block_size": 4096, 00:22:19.444 "num_blocks": 7936, 00:22:19.444 "uuid": "9e91feb0-a81f-410f-8c40-c3a19a4de1ca", 00:22:19.444 "md_size": 32, 00:22:19.444 "md_interleave": false, 00:22:19.444 "dif_type": 0, 00:22:19.444 "assigned_rate_limits": { 00:22:19.444 "rw_ios_per_sec": 0, 00:22:19.444 "rw_mbytes_per_sec": 0, 00:22:19.444 "r_mbytes_per_sec": 0, 00:22:19.444 "w_mbytes_per_sec": 0 00:22:19.444 }, 00:22:19.444 "claimed": false, 00:22:19.444 "zoned": false, 00:22:19.444 "supported_io_types": { 00:22:19.444 "read": true, 00:22:19.444 "write": true, 00:22:19.444 "unmap": false, 00:22:19.444 "flush": false, 00:22:19.444 "reset": true, 00:22:19.444 "nvme_admin": false, 00:22:19.444 "nvme_io": false, 00:22:19.444 "nvme_io_md": false, 00:22:19.444 "write_zeroes": true, 00:22:19.444 "zcopy": false, 00:22:19.444 "get_zone_info": false, 00:22:19.444 "zone_management": false, 00:22:19.444 "zone_append": false, 00:22:19.444 "compare": false, 00:22:19.444 "compare_and_write": false, 00:22:19.444 "abort": false, 00:22:19.444 "seek_hole": false, 00:22:19.444 "seek_data": false, 00:22:19.444 "copy": false, 00:22:19.444 "nvme_iov_md": false 00:22:19.444 }, 00:22:19.444 "memory_domains": [ 00:22:19.444 { 00:22:19.444 "dma_device_id": "system", 00:22:19.444 "dma_device_type": 1 00:22:19.444 }, 00:22:19.444 { 00:22:19.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.444 "dma_device_type": 2 00:22:19.444 }, 00:22:19.444 { 00:22:19.444 "dma_device_id": "system", 00:22:19.444 "dma_device_type": 1 00:22:19.444 }, 00:22:19.444 { 00:22:19.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.444 "dma_device_type": 2 00:22:19.444 } 00:22:19.444 ], 00:22:19.444 "driver_specific": { 00:22:19.444 "raid": { 00:22:19.444 "uuid": "9e91feb0-a81f-410f-8c40-c3a19a4de1ca", 00:22:19.444 "strip_size_kb": 0, 00:22:19.444 "state": "online", 00:22:19.444 "raid_level": "raid1", 00:22:19.444 "superblock": true, 00:22:19.444 
"num_base_bdevs": 2, 00:22:19.444 "num_base_bdevs_discovered": 2, 00:22:19.444 "num_base_bdevs_operational": 2, 00:22:19.444 "base_bdevs_list": [ 00:22:19.444 { 00:22:19.444 "name": "BaseBdev1", 00:22:19.444 "uuid": "2dbb9bab-1f2d-439d-a2fa-44c6ff2a272c", 00:22:19.444 "is_configured": true, 00:22:19.444 "data_offset": 256, 00:22:19.444 "data_size": 7936 00:22:19.444 }, 00:22:19.444 { 00:22:19.444 "name": "BaseBdev2", 00:22:19.444 "uuid": "a52d9977-8b9b-42b9-a7c0-b40d9276aab8", 00:22:19.444 "is_configured": true, 00:22:19.444 "data_offset": 256, 00:22:19.444 "data_size": 7936 00:22:19.444 } 00:22:19.444 ] 00:22:19.444 } 00:22:19.444 } 00:22:19.444 }' 00:22:19.444 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.444 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:19.444 BaseBdev2' 00:22:19.444 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.444 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:19.444 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.702 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.702 "name": "BaseBdev1", 00:22:19.702 "aliases": [ 00:22:19.702 "2dbb9bab-1f2d-439d-a2fa-44c6ff2a272c" 00:22:19.702 ], 00:22:19.702 "product_name": "Malloc disk", 00:22:19.702 "block_size": 4096, 00:22:19.702 "num_blocks": 8192, 00:22:19.702 "uuid": "2dbb9bab-1f2d-439d-a2fa-44c6ff2a272c", 00:22:19.702 "md_size": 32, 00:22:19.702 "md_interleave": false, 00:22:19.702 "dif_type": 0, 00:22:19.702 "assigned_rate_limits": { 00:22:19.702 "rw_ios_per_sec": 0, 00:22:19.702 "rw_mbytes_per_sec": 0, 00:22:19.702 "r_mbytes_per_sec": 0, 00:22:19.702 "w_mbytes_per_sec": 0 00:22:19.702 }, 00:22:19.702 "claimed": true, 00:22:19.702 "claim_type": "exclusive_write", 00:22:19.702 "zoned": false, 00:22:19.702 "supported_io_types": { 00:22:19.702 "read": true, 00:22:19.702 "write": true, 00:22:19.702 "unmap": true, 00:22:19.702 "flush": true, 00:22:19.702 "reset": true, 00:22:19.702 "nvme_admin": false, 00:22:19.702 "nvme_io": false, 00:22:19.702 "nvme_io_md": false, 00:22:19.702 "write_zeroes": true, 00:22:19.702 "zcopy": true, 00:22:19.702 "get_zone_info": false, 00:22:19.702 "zone_management": false, 00:22:19.702 "zone_append": false, 00:22:19.702 "compare": false, 00:22:19.702 "compare_and_write": false, 00:22:19.702 "abort": true, 00:22:19.702 "seek_hole": false, 00:22:19.702 "seek_data": false, 00:22:19.702 "copy": true, 00:22:19.702 "nvme_iov_md": false 00:22:19.702 }, 00:22:19.702 "memory_domains": [ 00:22:19.702 { 00:22:19.702 "dma_device_id": "system", 00:22:19.702 "dma_device_type": 1 00:22:19.702 }, 00:22:19.702 { 00:22:19.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.702 "dma_device_type": 2 00:22:19.702 } 00:22:19.702 ], 00:22:19.702 "driver_specific": {} 00:22:19.702 }' 00:22:19.702 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.702 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.702 10:29:44 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:19.702 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.702 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.961 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:20.219 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.220 "name": "BaseBdev2", 00:22:20.220 "aliases": [ 00:22:20.220 "a52d9977-8b9b-42b9-a7c0-b40d9276aab8" 00:22:20.220 ], 00:22:20.220 "product_name": "Malloc disk", 00:22:20.220 "block_size": 4096, 00:22:20.220 "num_blocks": 8192, 00:22:20.220 "uuid": "a52d9977-8b9b-42b9-a7c0-b40d9276aab8", 00:22:20.220 "md_size": 32, 00:22:20.220 "md_interleave": false, 00:22:20.220 "dif_type": 0, 00:22:20.220 "assigned_rate_limits": { 00:22:20.220 "rw_ios_per_sec": 0, 00:22:20.220 "rw_mbytes_per_sec": 0, 00:22:20.220 "r_mbytes_per_sec": 0, 00:22:20.220 "w_mbytes_per_sec": 0 00:22:20.220 }, 00:22:20.220 "claimed": true, 00:22:20.220 "claim_type": "exclusive_write", 00:22:20.220 "zoned": false, 00:22:20.220 "supported_io_types": { 00:22:20.220 "read": true, 00:22:20.220 "write": true, 00:22:20.220 "unmap": true, 00:22:20.220 "flush": true, 00:22:20.220 "reset": true, 00:22:20.220 "nvme_admin": false, 00:22:20.220 "nvme_io": false, 00:22:20.220 "nvme_io_md": false, 00:22:20.220 "write_zeroes": true, 00:22:20.220 "zcopy": true, 00:22:20.220 "get_zone_info": false, 00:22:20.220 "zone_management": false, 00:22:20.220 "zone_append": false, 00:22:20.220 "compare": false, 00:22:20.220 "compare_and_write": false, 00:22:20.220 "abort": true, 00:22:20.220 "seek_hole": false, 00:22:20.220 "seek_data": false, 00:22:20.220 "copy": true, 00:22:20.220 "nvme_iov_md": false 00:22:20.220 }, 00:22:20.220 "memory_domains": [ 00:22:20.220 { 00:22:20.220 "dma_device_id": "system", 00:22:20.220 "dma_device_type": 1 00:22:20.220 }, 00:22:20.220 { 00:22:20.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.220 "dma_device_type": 2 00:22:20.220 } 00:22:20.220 ], 00:22:20.220 "driver_specific": {} 00:22:20.220 }' 00:22:20.220 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:22:20.220 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.220 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:20.220 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.220 10:29:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.478 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:20.478 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.478 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.478 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:20.478 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.478 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.478 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:20.478 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:20.737 [2024-07-15 10:29:45.324360] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.737 10:29:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.737 "name": "Existed_Raid", 00:22:20.737 "uuid": "9e91feb0-a81f-410f-8c40-c3a19a4de1ca", 00:22:20.737 "strip_size_kb": 0, 00:22:20.737 "state": "online", 00:22:20.737 "raid_level": "raid1", 00:22:20.737 "superblock": true, 00:22:20.737 "num_base_bdevs": 2, 00:22:20.737 "num_base_bdevs_discovered": 1, 00:22:20.737 "num_base_bdevs_operational": 1, 00:22:20.737 "base_bdevs_list": [ 00:22:20.737 { 00:22:20.737 "name": null, 00:22:20.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.737 "is_configured": false, 00:22:20.737 "data_offset": 256, 00:22:20.737 "data_size": 7936 00:22:20.737 }, 00:22:20.737 { 00:22:20.737 "name": "BaseBdev2", 00:22:20.737 "uuid": "a52d9977-8b9b-42b9-a7c0-b40d9276aab8", 00:22:20.737 "is_configured": true, 00:22:20.737 "data_offset": 256, 00:22:20.737 "data_size": 7936 00:22:20.737 } 00:22:20.737 ] 00:22:20.737 }' 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.737 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.303 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:21.303 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:21.303 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.303 10:29:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:21.562 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:21.562 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.562 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:21.562 [2024-07-15 10:29:46.292658] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:21.562 [2024-07-15 10:29:46.292718] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:21.562 [2024-07-15 10:29:46.302997] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:21.562 [2024-07-15 10:29:46.303024] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:21.562 [2024-07-15 10:29:46.303031] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d90f50 name Existed_Raid, state offline 00:22:21.562 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:21.562 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:21.562 10:29:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.562 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 1891119 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 1891119 ']' 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 1891119 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1891119 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1891119' 00:22:21.822 killing process with pid 1891119 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 1891119 00:22:21.822 [2024-07-15 10:29:46.540565] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:21.822 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 1891119 00:22:21.822 [2024-07-15 10:29:46.541371] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:22.081 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:22:22.081 00:22:22.081 real 0m8.087s 00:22:22.081 user 0m14.179s 00:22:22.081 sys 0m1.652s 00:22:22.081 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.081 10:29:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.081 ************************************ 00:22:22.081 END TEST raid_state_function_test_sb_md_separate 00:22:22.081 ************************************ 00:22:22.081 10:29:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:22.081 10:29:46 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:22.081 10:29:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:22.081 10:29:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.081 10:29:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:22.081 ************************************ 00:22:22.081 START TEST raid_superblock_test_md_separate 00:22:22.081 
************************************ 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=1892690 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 1892690 /var/tmp/spdk-raid.sock 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 1892690 ']' 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:22.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:22.081 10:29:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.081 [2024-07-15 10:29:46.841748] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:22.081 [2024-07-15 10:29:46.841790] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892690 ] 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:22.340 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:22.340 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:22.340 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:22.340 [2024-07-15 10:29:46.932635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.340 [2024-07-15 10:29:47.007091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.340 [2024-07-15 10:29:47.059912] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.340 [2024-07-15 10:29:47.059939] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:22.906 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:23.164 malloc1 00:22:23.164 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:23.422 [2024-07-15 10:29:47.973039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
00:22:23.422 [2024-07-15 10:29:47.973075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.422 [2024-07-15 10:29:47.973089] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20b4cc0 00:22:23.422 [2024-07-15 10:29:47.973097] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.422 [2024-07-15 10:29:47.974122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.422 [2024-07-15 10:29:47.974150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:23.422 pt1 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:23.422 10:29:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:23.422 malloc2 00:22:23.422 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:23.680 [2024-07-15 10:29:48.306238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:23.680 [2024-07-15 10:29:48.306271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.680 [2024-07-15 10:29:48.306284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21c7b80 00:22:23.680 [2024-07-15 10:29:48.306292] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.680 [2024-07-15 10:29:48.307217] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.680 [2024-07-15 10:29:48.307236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:23.680 pt2 00:22:23.680 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:23.680 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:23.680 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:22:23.938 [2024-07-15 10:29:48.478693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:23.938 [2024-07-15 10:29:48.479534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:22:23.938 [2024-07-15 10:29:48.479631] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20b53c0 00:22:23.938 [2024-07-15 10:29:48.479640] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:23.938 [2024-07-15 10:29:48.479686] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21c82b0 00:22:23.938 [2024-07-15 10:29:48.479760] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20b53c0 00:22:23.938 [2024-07-15 10:29:48.479766] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20b53c0 00:22:23.938 [2024-07-15 10:29:48.479809] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.938 "name": "raid_bdev1", 00:22:23.938 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:23.938 "strip_size_kb": 0, 00:22:23.938 "state": "online", 00:22:23.938 "raid_level": "raid1", 00:22:23.938 "superblock": true, 00:22:23.938 "num_base_bdevs": 2, 00:22:23.938 "num_base_bdevs_discovered": 2, 00:22:23.938 "num_base_bdevs_operational": 2, 00:22:23.938 "base_bdevs_list": [ 00:22:23.938 { 00:22:23.938 "name": "pt1", 00:22:23.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:23.938 "is_configured": true, 00:22:23.938 "data_offset": 256, 00:22:23.938 "data_size": 7936 00:22:23.938 }, 00:22:23.938 { 00:22:23.938 "name": "pt2", 00:22:23.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:23.938 "is_configured": true, 00:22:23.938 "data_offset": 256, 00:22:23.938 "data_size": 7936 00:22:23.938 } 00:22:23.938 ] 00:22:23.938 }' 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.938 10:29:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.503 10:29:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:24.503 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:24.503 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:24.503 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:24.503 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:24.503 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:24.503 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:24.503 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:24.760 [2024-07-15 10:29:49.304966] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:24.760 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:24.761 "name": "raid_bdev1", 00:22:24.761 "aliases": [ 00:22:24.761 "ed2ac30a-a651-43c5-974b-5d8a9ebe1310" 00:22:24.761 ], 00:22:24.761 "product_name": "Raid Volume", 00:22:24.761 "block_size": 4096, 00:22:24.761 "num_blocks": 7936, 00:22:24.761 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:24.761 "md_size": 32, 00:22:24.761 "md_interleave": false, 00:22:24.761 "dif_type": 0, 00:22:24.761 "assigned_rate_limits": { 00:22:24.761 "rw_ios_per_sec": 0, 00:22:24.761 "rw_mbytes_per_sec": 0, 00:22:24.761 "r_mbytes_per_sec": 0, 00:22:24.761 "w_mbytes_per_sec": 0 00:22:24.761 }, 00:22:24.761 "claimed": false, 00:22:24.761 "zoned": false, 00:22:24.761 "supported_io_types": { 00:22:24.761 "read": true, 00:22:24.761 "write": true, 00:22:24.761 "unmap": false, 00:22:24.761 "flush": false, 00:22:24.761 "reset": true, 00:22:24.761 "nvme_admin": false, 00:22:24.761 "nvme_io": false, 00:22:24.761 "nvme_io_md": false, 00:22:24.761 "write_zeroes": true, 00:22:24.761 "zcopy": false, 00:22:24.761 "get_zone_info": false, 00:22:24.761 "zone_management": false, 00:22:24.761 "zone_append": false, 00:22:24.761 "compare": false, 00:22:24.761 "compare_and_write": false, 00:22:24.761 "abort": false, 00:22:24.761 "seek_hole": false, 00:22:24.761 "seek_data": false, 00:22:24.761 "copy": false, 00:22:24.761 "nvme_iov_md": false 00:22:24.761 }, 00:22:24.761 "memory_domains": [ 00:22:24.761 { 00:22:24.761 "dma_device_id": "system", 00:22:24.761 "dma_device_type": 1 00:22:24.761 }, 00:22:24.761 { 00:22:24.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.761 "dma_device_type": 2 00:22:24.761 }, 00:22:24.761 { 00:22:24.761 "dma_device_id": "system", 00:22:24.761 "dma_device_type": 1 00:22:24.761 }, 00:22:24.761 { 00:22:24.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.761 "dma_device_type": 2 00:22:24.761 } 00:22:24.761 ], 00:22:24.761 "driver_specific": { 00:22:24.761 "raid": { 00:22:24.761 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:24.761 "strip_size_kb": 0, 00:22:24.761 "state": "online", 00:22:24.761 "raid_level": "raid1", 00:22:24.761 "superblock": true, 00:22:24.761 "num_base_bdevs": 2, 00:22:24.761 "num_base_bdevs_discovered": 2, 00:22:24.761 "num_base_bdevs_operational": 2, 00:22:24.761 "base_bdevs_list": [ 00:22:24.761 { 00:22:24.761 "name": "pt1", 
00:22:24.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:24.761 "is_configured": true, 00:22:24.761 "data_offset": 256, 00:22:24.761 "data_size": 7936 00:22:24.761 }, 00:22:24.761 { 00:22:24.761 "name": "pt2", 00:22:24.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:24.761 "is_configured": true, 00:22:24.761 "data_offset": 256, 00:22:24.761 "data_size": 7936 00:22:24.761 } 00:22:24.761 ] 00:22:24.761 } 00:22:24.761 } 00:22:24.761 }' 00:22:24.761 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:24.761 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:24.761 pt2' 00:22:24.761 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:24.761 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:24.761 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:24.761 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:24.761 "name": "pt1", 00:22:24.761 "aliases": [ 00:22:24.761 "00000000-0000-0000-0000-000000000001" 00:22:24.761 ], 00:22:24.761 "product_name": "passthru", 00:22:24.761 "block_size": 4096, 00:22:24.761 "num_blocks": 8192, 00:22:24.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:24.761 "md_size": 32, 00:22:24.761 "md_interleave": false, 00:22:24.761 "dif_type": 0, 00:22:24.761 "assigned_rate_limits": { 00:22:24.761 "rw_ios_per_sec": 0, 00:22:24.761 "rw_mbytes_per_sec": 0, 00:22:24.761 "r_mbytes_per_sec": 0, 00:22:24.761 "w_mbytes_per_sec": 0 00:22:24.761 }, 00:22:24.761 "claimed": true, 00:22:24.761 "claim_type": "exclusive_write", 00:22:24.761 "zoned": false, 00:22:24.761 "supported_io_types": { 00:22:24.761 "read": true, 00:22:24.761 "write": true, 00:22:24.761 "unmap": true, 00:22:24.761 "flush": true, 00:22:24.761 "reset": true, 00:22:24.761 "nvme_admin": false, 00:22:24.761 "nvme_io": false, 00:22:24.761 "nvme_io_md": false, 00:22:24.761 "write_zeroes": true, 00:22:24.761 "zcopy": true, 00:22:24.761 "get_zone_info": false, 00:22:24.761 "zone_management": false, 00:22:24.761 "zone_append": false, 00:22:24.761 "compare": false, 00:22:24.761 "compare_and_write": false, 00:22:24.761 "abort": true, 00:22:24.761 "seek_hole": false, 00:22:24.761 "seek_data": false, 00:22:24.761 "copy": true, 00:22:24.761 "nvme_iov_md": false 00:22:24.761 }, 00:22:24.761 "memory_domains": [ 00:22:24.761 { 00:22:24.761 "dma_device_id": "system", 00:22:24.761 "dma_device_type": 1 00:22:24.761 }, 00:22:24.761 { 00:22:24.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.761 "dma_device_type": 2 00:22:24.761 } 00:22:24.761 ], 00:22:24.761 "driver_specific": { 00:22:24.761 "passthru": { 00:22:24.761 "name": "pt1", 00:22:24.761 "base_bdev_name": "malloc1" 00:22:24.761 } 00:22:24.761 } 00:22:24.761 }' 00:22:24.761 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:25.018 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.275 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.275 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:25.275 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:25.275 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:25.275 10:29:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:25.275 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:25.275 "name": "pt2", 00:22:25.275 "aliases": [ 00:22:25.275 "00000000-0000-0000-0000-000000000002" 00:22:25.275 ], 00:22:25.275 "product_name": "passthru", 00:22:25.275 "block_size": 4096, 00:22:25.275 "num_blocks": 8192, 00:22:25.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:25.275 "md_size": 32, 00:22:25.275 "md_interleave": false, 00:22:25.275 "dif_type": 0, 00:22:25.275 "assigned_rate_limits": { 00:22:25.275 "rw_ios_per_sec": 0, 00:22:25.275 "rw_mbytes_per_sec": 0, 00:22:25.275 "r_mbytes_per_sec": 0, 00:22:25.275 "w_mbytes_per_sec": 0 00:22:25.275 }, 00:22:25.275 "claimed": true, 00:22:25.275 "claim_type": "exclusive_write", 00:22:25.275 "zoned": false, 00:22:25.275 "supported_io_types": { 00:22:25.275 "read": true, 00:22:25.275 "write": true, 00:22:25.275 "unmap": true, 00:22:25.275 "flush": true, 00:22:25.275 "reset": true, 00:22:25.275 "nvme_admin": false, 00:22:25.275 "nvme_io": false, 00:22:25.275 "nvme_io_md": false, 00:22:25.275 "write_zeroes": true, 00:22:25.275 "zcopy": true, 00:22:25.275 "get_zone_info": false, 00:22:25.275 "zone_management": false, 00:22:25.275 "zone_append": false, 00:22:25.275 "compare": false, 00:22:25.275 "compare_and_write": false, 00:22:25.275 "abort": true, 00:22:25.275 "seek_hole": false, 00:22:25.275 "seek_data": false, 00:22:25.275 "copy": true, 00:22:25.275 "nvme_iov_md": false 00:22:25.275 }, 00:22:25.275 "memory_domains": [ 00:22:25.275 { 00:22:25.275 "dma_device_id": "system", 00:22:25.275 "dma_device_type": 1 00:22:25.275 }, 00:22:25.275 { 00:22:25.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.275 "dma_device_type": 2 00:22:25.275 } 00:22:25.275 ], 00:22:25.275 "driver_specific": { 00:22:25.275 "passthru": { 00:22:25.275 "name": "pt2", 00:22:25.275 "base_bdev_name": "malloc2" 00:22:25.275 } 00:22:25.275 } 00:22:25.275 }' 00:22:25.275 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.533 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.533 10:29:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:25.533 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.533 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.533 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:25.533 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.533 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.533 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:25.533 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.791 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.791 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:25.791 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:25.791 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:25.791 [2024-07-15 10:29:50.548172] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.791 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ed2ac30a-a651-43c5-974b-5d8a9ebe1310 00:22:25.791 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z ed2ac30a-a651-43c5-974b-5d8a9ebe1310 ']' 00:22:25.791 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:26.049 [2024-07-15 10:29:50.716440] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.049 [2024-07-15 10:29:50.716455] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:26.049 [2024-07-15 10:29:50.716493] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.049 [2024-07-15 10:29:50.716528] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.049 [2024-07-15 10:29:50.716535] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20b53c0 name raid_bdev1, state offline 00:22:26.049 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.049 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:26.308 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:26.308 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:26.308 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:26.308 10:29:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:22:26.308 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:26.308 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:26.566 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:26.566 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:22:26.825 [2024-07-15 10:29:51.562592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:26.825 [2024-07-15 10:29:51.563562] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:26.825 [2024-07-15 10:29:51.563602] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:26.825 [2024-07-15 10:29:51.563633] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on 
bdev malloc2 00:22:26.825 [2024-07-15 10:29:51.563645] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.825 [2024-07-15 10:29:51.563652] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21c92b0 name raid_bdev1, state configuring 00:22:26.825 request: 00:22:26.825 { 00:22:26.825 "name": "raid_bdev1", 00:22:26.825 "raid_level": "raid1", 00:22:26.825 "base_bdevs": [ 00:22:26.825 "malloc1", 00:22:26.825 "malloc2" 00:22:26.825 ], 00:22:26.825 "superblock": false, 00:22:26.825 "method": "bdev_raid_create", 00:22:26.825 "req_id": 1 00:22:26.825 } 00:22:26.825 Got JSON-RPC error response 00:22:26.825 response: 00:22:26.825 { 00:22:26.825 "code": -17, 00:22:26.825 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:26.825 } 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.825 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:27.083 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:27.083 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:27.083 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:27.341 [2024-07-15 10:29:51.907451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:27.342 [2024-07-15 10:29:51.907485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:27.342 [2024-07-15 10:29:51.907498] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2032c30 00:22:27.342 [2024-07-15 10:29:51.907506] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:27.342 [2024-07-15 10:29:51.908569] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:27.342 [2024-07-15 10:29:51.908590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:27.342 [2024-07-15 10:29:51.908620] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:27.342 [2024-07-15 10:29:51.908637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:27.342 pt1 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.342 10:29:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.342 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.342 "name": "raid_bdev1", 00:22:27.342 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:27.342 "strip_size_kb": 0, 00:22:27.342 "state": "configuring", 00:22:27.342 "raid_level": "raid1", 00:22:27.342 "superblock": true, 00:22:27.342 "num_base_bdevs": 2, 00:22:27.342 "num_base_bdevs_discovered": 1, 00:22:27.342 "num_base_bdevs_operational": 2, 00:22:27.342 "base_bdevs_list": [ 00:22:27.342 { 00:22:27.342 "name": "pt1", 00:22:27.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:27.342 "is_configured": true, 00:22:27.342 "data_offset": 256, 00:22:27.342 "data_size": 7936 00:22:27.342 }, 00:22:27.342 { 00:22:27.342 "name": null, 00:22:27.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:27.342 "is_configured": false, 00:22:27.342 "data_offset": 256, 00:22:27.342 "data_size": 7936 00:22:27.342 } 00:22:27.342 ] 00:22:27.342 }' 00:22:27.342 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.342 10:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.909 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:22:27.909 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:27.909 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:27.909 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:28.168 [2024-07-15 10:29:52.741741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:28.168 [2024-07-15 10:29:52.741773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.168 [2024-07-15 10:29:52.741785] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21cb1c0 00:22:28.168 [2024-07-15 10:29:52.741793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.168 [2024-07-15 10:29:52.741936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.168 [2024-07-15 10:29:52.741947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:22:28.168 [2024-07-15 10:29:52.741976] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:28.168 [2024-07-15 10:29:52.741988] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:28.168 [2024-07-15 10:29:52.742048] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21ca850 00:22:28.168 [2024-07-15 10:29:52.742054] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:28.168 [2024-07-15 10:29:52.742093] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21cc360 00:22:28.168 [2024-07-15 10:29:52.742159] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21ca850 00:22:28.168 [2024-07-15 10:29:52.742165] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21ca850 00:22:28.168 [2024-07-15 10:29:52.742210] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.168 pt2 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.168 "name": "raid_bdev1", 00:22:28.168 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:28.168 "strip_size_kb": 0, 00:22:28.168 "state": "online", 00:22:28.168 "raid_level": "raid1", 00:22:28.168 "superblock": true, 00:22:28.168 "num_base_bdevs": 2, 00:22:28.168 "num_base_bdevs_discovered": 2, 00:22:28.168 "num_base_bdevs_operational": 2, 00:22:28.168 "base_bdevs_list": [ 00:22:28.168 { 00:22:28.168 "name": "pt1", 00:22:28.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:28.168 "is_configured": true, 00:22:28.168 "data_offset": 256, 00:22:28.168 "data_size": 7936 
00:22:28.168 }, 00:22:28.168 { 00:22:28.168 "name": "pt2", 00:22:28.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:28.168 "is_configured": true, 00:22:28.168 "data_offset": 256, 00:22:28.168 "data_size": 7936 00:22:28.168 } 00:22:28.168 ] 00:22:28.168 }' 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.168 10:29:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.734 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:28.734 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:28.734 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:28.734 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:28.734 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:28.734 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:22:28.734 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:28.734 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:28.992 [2024-07-15 10:29:53.580075] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:28.992 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:28.992 "name": "raid_bdev1", 00:22:28.992 "aliases": [ 00:22:28.992 "ed2ac30a-a651-43c5-974b-5d8a9ebe1310" 00:22:28.992 ], 00:22:28.992 "product_name": "Raid Volume", 00:22:28.992 "block_size": 4096, 00:22:28.992 "num_blocks": 7936, 00:22:28.992 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:28.992 "md_size": 32, 00:22:28.992 "md_interleave": false, 00:22:28.992 "dif_type": 0, 00:22:28.992 "assigned_rate_limits": { 00:22:28.992 "rw_ios_per_sec": 0, 00:22:28.992 "rw_mbytes_per_sec": 0, 00:22:28.992 "r_mbytes_per_sec": 0, 00:22:28.992 "w_mbytes_per_sec": 0 00:22:28.992 }, 00:22:28.992 "claimed": false, 00:22:28.992 "zoned": false, 00:22:28.992 "supported_io_types": { 00:22:28.992 "read": true, 00:22:28.992 "write": true, 00:22:28.992 "unmap": false, 00:22:28.992 "flush": false, 00:22:28.992 "reset": true, 00:22:28.992 "nvme_admin": false, 00:22:28.992 "nvme_io": false, 00:22:28.992 "nvme_io_md": false, 00:22:28.992 "write_zeroes": true, 00:22:28.992 "zcopy": false, 00:22:28.992 "get_zone_info": false, 00:22:28.992 "zone_management": false, 00:22:28.992 "zone_append": false, 00:22:28.992 "compare": false, 00:22:28.992 "compare_and_write": false, 00:22:28.992 "abort": false, 00:22:28.992 "seek_hole": false, 00:22:28.992 "seek_data": false, 00:22:28.992 "copy": false, 00:22:28.992 "nvme_iov_md": false 00:22:28.992 }, 00:22:28.992 "memory_domains": [ 00:22:28.992 { 00:22:28.992 "dma_device_id": "system", 00:22:28.992 "dma_device_type": 1 00:22:28.992 }, 00:22:28.992 { 00:22:28.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.992 "dma_device_type": 2 00:22:28.992 }, 00:22:28.992 { 00:22:28.992 "dma_device_id": "system", 00:22:28.992 "dma_device_type": 1 00:22:28.992 }, 00:22:28.992 { 00:22:28.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.992 
"dma_device_type": 2 00:22:28.992 } 00:22:28.992 ], 00:22:28.992 "driver_specific": { 00:22:28.992 "raid": { 00:22:28.992 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:28.992 "strip_size_kb": 0, 00:22:28.992 "state": "online", 00:22:28.992 "raid_level": "raid1", 00:22:28.992 "superblock": true, 00:22:28.992 "num_base_bdevs": 2, 00:22:28.992 "num_base_bdevs_discovered": 2, 00:22:28.992 "num_base_bdevs_operational": 2, 00:22:28.992 "base_bdevs_list": [ 00:22:28.992 { 00:22:28.992 "name": "pt1", 00:22:28.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:28.992 "is_configured": true, 00:22:28.992 "data_offset": 256, 00:22:28.992 "data_size": 7936 00:22:28.992 }, 00:22:28.992 { 00:22:28.992 "name": "pt2", 00:22:28.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:28.992 "is_configured": true, 00:22:28.992 "data_offset": 256, 00:22:28.992 "data_size": 7936 00:22:28.992 } 00:22:28.992 ] 00:22:28.992 } 00:22:28.992 } 00:22:28.992 }' 00:22:28.992 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:28.992 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:28.992 pt2' 00:22:28.992 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:28.992 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:28.992 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:29.251 "name": "pt1", 00:22:29.251 "aliases": [ 00:22:29.251 "00000000-0000-0000-0000-000000000001" 00:22:29.251 ], 00:22:29.251 "product_name": "passthru", 00:22:29.251 "block_size": 4096, 00:22:29.251 "num_blocks": 8192, 00:22:29.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:29.251 "md_size": 32, 00:22:29.251 "md_interleave": false, 00:22:29.251 "dif_type": 0, 00:22:29.251 "assigned_rate_limits": { 00:22:29.251 "rw_ios_per_sec": 0, 00:22:29.251 "rw_mbytes_per_sec": 0, 00:22:29.251 "r_mbytes_per_sec": 0, 00:22:29.251 "w_mbytes_per_sec": 0 00:22:29.251 }, 00:22:29.251 "claimed": true, 00:22:29.251 "claim_type": "exclusive_write", 00:22:29.251 "zoned": false, 00:22:29.251 "supported_io_types": { 00:22:29.251 "read": true, 00:22:29.251 "write": true, 00:22:29.251 "unmap": true, 00:22:29.251 "flush": true, 00:22:29.251 "reset": true, 00:22:29.251 "nvme_admin": false, 00:22:29.251 "nvme_io": false, 00:22:29.251 "nvme_io_md": false, 00:22:29.251 "write_zeroes": true, 00:22:29.251 "zcopy": true, 00:22:29.251 "get_zone_info": false, 00:22:29.251 "zone_management": false, 00:22:29.251 "zone_append": false, 00:22:29.251 "compare": false, 00:22:29.251 "compare_and_write": false, 00:22:29.251 "abort": true, 00:22:29.251 "seek_hole": false, 00:22:29.251 "seek_data": false, 00:22:29.251 "copy": true, 00:22:29.251 "nvme_iov_md": false 00:22:29.251 }, 00:22:29.251 "memory_domains": [ 00:22:29.251 { 00:22:29.251 "dma_device_id": "system", 00:22:29.251 "dma_device_type": 1 00:22:29.251 }, 00:22:29.251 { 00:22:29.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.251 "dma_device_type": 2 00:22:29.251 } 00:22:29.251 ], 00:22:29.251 "driver_specific": { 00:22:29.251 "passthru": { 00:22:29.251 
"name": "pt1", 00:22:29.251 "base_bdev_name": "malloc1" 00:22:29.251 } 00:22:29.251 } 00:22:29.251 }' 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.251 10:29:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.251 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:29.251 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.509 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.509 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:29.509 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:29.509 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:29.509 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:29.509 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:29.509 "name": "pt2", 00:22:29.509 "aliases": [ 00:22:29.509 "00000000-0000-0000-0000-000000000002" 00:22:29.509 ], 00:22:29.509 "product_name": "passthru", 00:22:29.509 "block_size": 4096, 00:22:29.509 "num_blocks": 8192, 00:22:29.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:29.509 "md_size": 32, 00:22:29.509 "md_interleave": false, 00:22:29.509 "dif_type": 0, 00:22:29.509 "assigned_rate_limits": { 00:22:29.509 "rw_ios_per_sec": 0, 00:22:29.509 "rw_mbytes_per_sec": 0, 00:22:29.509 "r_mbytes_per_sec": 0, 00:22:29.509 "w_mbytes_per_sec": 0 00:22:29.509 }, 00:22:29.509 "claimed": true, 00:22:29.509 "claim_type": "exclusive_write", 00:22:29.509 "zoned": false, 00:22:29.509 "supported_io_types": { 00:22:29.509 "read": true, 00:22:29.509 "write": true, 00:22:29.509 "unmap": true, 00:22:29.509 "flush": true, 00:22:29.509 "reset": true, 00:22:29.509 "nvme_admin": false, 00:22:29.509 "nvme_io": false, 00:22:29.509 "nvme_io_md": false, 00:22:29.509 "write_zeroes": true, 00:22:29.509 "zcopy": true, 00:22:29.509 "get_zone_info": false, 00:22:29.509 "zone_management": false, 00:22:29.509 "zone_append": false, 00:22:29.509 "compare": false, 00:22:29.509 "compare_and_write": false, 00:22:29.509 "abort": true, 00:22:29.509 "seek_hole": false, 00:22:29.509 "seek_data": false, 00:22:29.509 "copy": true, 00:22:29.509 "nvme_iov_md": false 00:22:29.509 }, 00:22:29.509 "memory_domains": [ 00:22:29.509 { 00:22:29.509 "dma_device_id": "system", 00:22:29.509 "dma_device_type": 1 00:22:29.509 }, 00:22:29.509 { 00:22:29.509 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:29.509 "dma_device_type": 2 00:22:29.509 } 00:22:29.509 ], 00:22:29.509 "driver_specific": { 00:22:29.509 "passthru": { 00:22:29.509 "name": "pt2", 00:22:29.509 "base_bdev_name": "malloc2" 00:22:29.509 } 00:22:29.509 } 00:22:29.509 }' 00:22:29.509 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:29.768 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.026 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:22:30.026 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:30.026 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:30.026 [2024-07-15 10:29:54.739052] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:30.026 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' ed2ac30a-a651-43c5-974b-5d8a9ebe1310 '!=' ed2ac30a-a651-43c5-974b-5d8a9ebe1310 ']' 00:22:30.026 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:22:30.026 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:30.026 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:22:30.026 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:30.284 [2024-07-15 10:29:54.895306] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.284 10:29:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.542 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.542 "name": "raid_bdev1", 00:22:30.542 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:30.542 "strip_size_kb": 0, 00:22:30.542 "state": "online", 00:22:30.542 "raid_level": "raid1", 00:22:30.542 "superblock": true, 00:22:30.542 "num_base_bdevs": 2, 00:22:30.542 "num_base_bdevs_discovered": 1, 00:22:30.542 "num_base_bdevs_operational": 1, 00:22:30.542 "base_bdevs_list": [ 00:22:30.542 { 00:22:30.542 "name": null, 00:22:30.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.542 "is_configured": false, 00:22:30.542 "data_offset": 256, 00:22:30.542 "data_size": 7936 00:22:30.542 }, 00:22:30.542 { 00:22:30.542 "name": "pt2", 00:22:30.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:30.542 "is_configured": true, 00:22:30.542 "data_offset": 256, 00:22:30.542 "data_size": 7936 00:22:30.542 } 00:22:30.542 ] 00:22:30.542 }' 00:22:30.542 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.542 10:29:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.801 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:31.059 [2024-07-15 10:29:55.705375] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:31.059 [2024-07-15 10:29:55.705392] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:31.059 [2024-07-15 10:29:55.705421] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.059 [2024-07-15 10:29:55.705448] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:31.059 [2024-07-15 10:29:55.705455] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21ca850 name raid_bdev1, state offline 00:22:31.059 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.059 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:22:31.316 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:22:31.316 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:22:31.316 10:29:55 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:22:31.316 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:31.317 10:29:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:31.317 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:22:31.317 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:22:31.317 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:22:31.317 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:22:31.317 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:22:31.317 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:31.575 [2024-07-15 10:29:56.222683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:31.575 [2024-07-15 10:29:56.222716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.575 [2024-07-15 10:29:56.222728] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21caf50 00:22:31.575 [2024-07-15 10:29:56.222736] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.575 [2024-07-15 10:29:56.223766] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.575 [2024-07-15 10:29:56.223787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:31.575 [2024-07-15 10:29:56.223819] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:31.575 [2024-07-15 10:29:56.223835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:31.575 [2024-07-15 10:29:56.223886] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21c8600 00:22:31.575 [2024-07-15 10:29:56.223893] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:31.575 [2024-07-15 10:29:56.223936] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2033be0 00:22:31.575 [2024-07-15 10:29:56.224001] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21c8600 00:22:31.575 [2024-07-15 10:29:56.224007] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21c8600 00:22:31.575 [2024-07-15 10:29:56.224048] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.575 pt2 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.575 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.894 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.894 "name": "raid_bdev1", 00:22:31.894 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:31.894 "strip_size_kb": 0, 00:22:31.894 "state": "online", 00:22:31.894 "raid_level": "raid1", 00:22:31.894 "superblock": true, 00:22:31.894 "num_base_bdevs": 2, 00:22:31.894 "num_base_bdevs_discovered": 1, 00:22:31.894 "num_base_bdevs_operational": 1, 00:22:31.894 "base_bdevs_list": [ 00:22:31.894 { 00:22:31.894 "name": null, 00:22:31.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.894 "is_configured": false, 00:22:31.894 "data_offset": 256, 00:22:31.894 "data_size": 7936 00:22:31.894 }, 00:22:31.894 { 00:22:31.894 "name": "pt2", 00:22:31.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:31.894 "is_configured": true, 00:22:31.894 "data_offset": 256, 00:22:31.894 "data_size": 7936 00:22:31.894 } 00:22:31.894 ] 00:22:31.894 }' 00:22:31.894 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.894 10:29:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.152 10:29:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:32.411 [2024-07-15 10:29:57.080878] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:32.411 [2024-07-15 10:29:57.080895] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:32.411 [2024-07-15 10:29:57.080935] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.411 [2024-07-15 10:29:57.080962] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:32.411 [2024-07-15 10:29:57.080969] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21c8600 name raid_bdev1, state offline 00:22:32.411 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.411 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 
00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:32.670 [2024-07-15 10:29:57.405723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:32.670 [2024-07-15 10:29:57.405759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.670 [2024-07-15 10:29:57.405773] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2033180 00:22:32.670 [2024-07-15 10:29:57.405781] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.670 [2024-07-15 10:29:57.406823] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.670 [2024-07-15 10:29:57.406844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:32.670 [2024-07-15 10:29:57.406879] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:32.670 [2024-07-15 10:29:57.406896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:32.670 [2024-07-15 10:29:57.406965] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:32.670 [2024-07-15 10:29:57.406974] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:32.670 [2024-07-15 10:29:57.406983] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21ce7a0 name raid_bdev1, state configuring 00:22:32.670 [2024-07-15 10:29:57.406999] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:32.670 [2024-07-15 10:29:57.407033] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21cb840 00:22:32.670 [2024-07-15 10:29:57.407040] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:32.670 [2024-07-15 10:29:57.407083] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21ccb90 00:22:32.670 [2024-07-15 10:29:57.407146] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21cb840 00:22:32.670 [2024-07-15 10:29:57.407152] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x21cb840 00:22:32.670 [2024-07-15 10:29:57.407197] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.670 pt1 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 
00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.670 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.928 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.928 "name": "raid_bdev1", 00:22:32.928 "uuid": "ed2ac30a-a651-43c5-974b-5d8a9ebe1310", 00:22:32.928 "strip_size_kb": 0, 00:22:32.928 "state": "online", 00:22:32.928 "raid_level": "raid1", 00:22:32.928 "superblock": true, 00:22:32.928 "num_base_bdevs": 2, 00:22:32.928 "num_base_bdevs_discovered": 1, 00:22:32.928 "num_base_bdevs_operational": 1, 00:22:32.928 "base_bdevs_list": [ 00:22:32.928 { 00:22:32.928 "name": null, 00:22:32.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.928 "is_configured": false, 00:22:32.928 "data_offset": 256, 00:22:32.928 "data_size": 7936 00:22:32.928 }, 00:22:32.928 { 00:22:32.928 "name": "pt2", 00:22:32.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:32.928 "is_configured": true, 00:22:32.928 "data_offset": 256, 00:22:32.928 "data_size": 7936 00:22:32.928 } 00:22:32.928 ] 00:22:32.928 }' 00:22:32.928 10:29:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.928 10:29:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.495 10:29:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:33.495 10:29:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:33.495 10:29:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:22:33.495 10:29:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:33.495 10:29:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:22:33.754 [2024-07-15 10:29:58.396419] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' ed2ac30a-a651-43c5-974b-5d8a9ebe1310 '!=' ed2ac30a-a651-43c5-974b-5d8a9ebe1310 ']' 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 1892690 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 1892690 ']' 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 1892690 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@953 -- # uname 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1892690 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1892690' 00:22:33.754 killing process with pid 1892690 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 1892690 00:22:33.754 [2024-07-15 10:29:58.466928] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.754 [2024-07-15 10:29:58.466970] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:33.754 [2024-07-15 10:29:58.467000] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:33.754 [2024-07-15 10:29:58.467007] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21cb840 name raid_bdev1, state offline 00:22:33.754 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 1892690 00:22:33.754 [2024-07-15 10:29:58.485963] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:34.014 10:29:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:22:34.014 00:22:34.014 real 0m11.861s 00:22:34.014 user 0m21.346s 00:22:34.014 sys 0m2.357s 00:22:34.014 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:34.014 10:29:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.014 ************************************ 00:22:34.014 END TEST raid_superblock_test_md_separate 00:22:34.014 ************************************ 00:22:34.014 10:29:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:34.014 10:29:58 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:22:34.014 10:29:58 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:34.014 10:29:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:22:34.014 10:29:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:34.014 10:29:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:34.014 ************************************ 00:22:34.014 START TEST raid_rebuild_test_sb_md_separate 00:22:34.014 ************************************ 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:22:34.014 10:29:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=1895109 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 1895109 /var/tmp/spdk-raid.sock 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 1895109 ']' 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:34.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.014 10:29:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.014 [2024-07-15 10:29:58.781365] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:34.014 [2024-07-15 10:29:58.781407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895109 ] 00:22:34.014 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:34.014 Zero copy mechanism will not be used. 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:34.273 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:34.273 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:34.273 [2024-07-15 10:29:58.871636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.273 [2024-07-15 10:29:58.945027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.273 [2024-07-15 10:29:58.999020] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.273 [2024-07-15 10:29:58.999063] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.840 10:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.840 10:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:22:34.840 10:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:34.840 10:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:35.098 BaseBdev1_malloc 00:22:35.098 10:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:35.358 [2024-07-15 10:29:59.891214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:35.358 [2024-07-15 10:29:59.891249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.358 [2024-07-15 10:29:59.891270] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10cdfc0 00:22:35.358 [2024-07-15 10:29:59.891279] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.358 [2024-07-15 
10:29:59.892317] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.358 [2024-07-15 10:29:59.892339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:35.358 BaseBdev1 00:22:35.358 10:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:22:35.358 10:29:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:35.358 BaseBdev2_malloc 00:22:35.358 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:35.616 [2024-07-15 10:30:00.228528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:35.616 [2024-07-15 10:30:00.228562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.616 [2024-07-15 10:30:00.228579] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e11f0 00:22:35.616 [2024-07-15 10:30:00.228587] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.616 [2024-07-15 10:30:00.229545] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.616 [2024-07-15 10:30:00.229565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:35.616 BaseBdev2 00:22:35.616 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:35.874 spare_malloc 00:22:35.874 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:35.874 spare_delay 00:22:35.874 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:36.133 [2024-07-15 10:30:00.730070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:36.133 [2024-07-15 10:30:00.730101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.133 [2024-07-15 10:30:00.730118] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e4230 00:22:36.133 [2024-07-15 10:30:00.730127] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.133 [2024-07-15 10:30:00.731028] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.133 [2024-07-15 10:30:00.731050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:36.133 spare 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:36.133 [2024-07-15 10:30:00.882490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.133 
[2024-07-15 10:30:00.883329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:36.133 [2024-07-15 10:30:00.883442] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11e4fb0 00:22:36.133 [2024-07-15 10:30:00.883451] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:36.133 [2024-07-15 10:30:00.883501] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x104c210 00:22:36.133 [2024-07-15 10:30:00.883574] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11e4fb0 00:22:36.133 [2024-07-15 10:30:00.883581] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11e4fb0 00:22:36.133 [2024-07-15 10:30:00.883631] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.133 10:30:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.391 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.391 "name": "raid_bdev1", 00:22:36.391 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:36.391 "strip_size_kb": 0, 00:22:36.391 "state": "online", 00:22:36.391 "raid_level": "raid1", 00:22:36.391 "superblock": true, 00:22:36.391 "num_base_bdevs": 2, 00:22:36.391 "num_base_bdevs_discovered": 2, 00:22:36.391 "num_base_bdevs_operational": 2, 00:22:36.391 "base_bdevs_list": [ 00:22:36.391 { 00:22:36.391 "name": "BaseBdev1", 00:22:36.391 "uuid": "ba069461-065c-54a4-8f5e-7e1be8d8e2fc", 00:22:36.391 "is_configured": true, 00:22:36.391 "data_offset": 256, 00:22:36.391 "data_size": 7936 00:22:36.391 }, 00:22:36.391 { 00:22:36.391 "name": "BaseBdev2", 00:22:36.391 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:36.391 "is_configured": true, 00:22:36.391 "data_offset": 256, 00:22:36.391 "data_size": 7936 00:22:36.391 } 00:22:36.391 ] 00:22:36.391 }' 00:22:36.391 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.391 10:30:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.957 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:22:36.957 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:36.957 [2024-07-15 10:30:01.684698] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.957 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:22:36.958 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.958 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.216 10:30:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:37.474 [2024-07-15 10:30:02.029452] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e8780 00:22:37.474 /dev/nbd0 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:37.474 1+0 records in 00:22:37.474 1+0 records out 00:22:37.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275679 s, 14.9 MB/s 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:22:37.474 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:38.041 7936+0 records in 00:22:38.041 7936+0 records out 00:22:38.041 32505856 bytes (33 MB, 31 MiB) copied, 0.493188 s, 65.9 MB/s 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:38.041 [2024-07-15 10:30:02.781788] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:38.041 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:38.298 [2024-07-15 10:30:02.946254] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.298 10:30:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.567 10:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.567 "name": "raid_bdev1", 00:22:38.567 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:38.567 "strip_size_kb": 0, 00:22:38.567 "state": "online", 00:22:38.567 "raid_level": "raid1", 00:22:38.567 "superblock": true, 00:22:38.567 "num_base_bdevs": 2, 00:22:38.567 "num_base_bdevs_discovered": 1, 00:22:38.567 "num_base_bdevs_operational": 1, 00:22:38.567 "base_bdevs_list": [ 00:22:38.567 { 00:22:38.567 "name": null, 00:22:38.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.567 "is_configured": false, 00:22:38.567 "data_offset": 256, 00:22:38.567 "data_size": 7936 00:22:38.567 }, 00:22:38.567 { 00:22:38.567 "name": "BaseBdev2", 00:22:38.567 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:38.567 "is_configured": true, 00:22:38.567 "data_offset": 256, 00:22:38.567 "data_size": 7936 
00:22:38.567 } 00:22:38.567 ] 00:22:38.567 }' 00:22:38.567 10:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.567 10:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.135 10:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:39.135 [2024-07-15 10:30:03.780408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:39.135 [2024-07-15 10:30:03.782398] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e2ef0 00:22:39.135 [2024-07-15 10:30:03.783906] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:39.135 10:30:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:22:40.068 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:40.068 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:40.068 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:40.068 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:40.068 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:40.068 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.068 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.326 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:40.326 "name": "raid_bdev1", 00:22:40.326 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:40.326 "strip_size_kb": 0, 00:22:40.326 "state": "online", 00:22:40.326 "raid_level": "raid1", 00:22:40.327 "superblock": true, 00:22:40.327 "num_base_bdevs": 2, 00:22:40.327 "num_base_bdevs_discovered": 2, 00:22:40.327 "num_base_bdevs_operational": 2, 00:22:40.327 "process": { 00:22:40.327 "type": "rebuild", 00:22:40.327 "target": "spare", 00:22:40.327 "progress": { 00:22:40.327 "blocks": 2816, 00:22:40.327 "percent": 35 00:22:40.327 } 00:22:40.327 }, 00:22:40.327 "base_bdevs_list": [ 00:22:40.327 { 00:22:40.327 "name": "spare", 00:22:40.327 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:40.327 "is_configured": true, 00:22:40.327 "data_offset": 256, 00:22:40.327 "data_size": 7936 00:22:40.327 }, 00:22:40.327 { 00:22:40.327 "name": "BaseBdev2", 00:22:40.327 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:40.327 "is_configured": true, 00:22:40.327 "data_offset": 256, 00:22:40.327 "data_size": 7936 00:22:40.327 } 00:22:40.327 ] 00:22:40.327 }' 00:22:40.327 10:30:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:40.327 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:40.327 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:40.327 10:30:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.327 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:40.584 [2024-07-15 10:30:05.228636] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:40.584 [2024-07-15 10:30:05.294425] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:40.584 [2024-07-15 10:30:05.294458] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.584 [2024-07-15 10:30:05.294468] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:40.584 [2024-07-15 10:30:05.294489] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.584 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.842 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.842 "name": "raid_bdev1", 00:22:40.842 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:40.842 "strip_size_kb": 0, 00:22:40.842 "state": "online", 00:22:40.842 "raid_level": "raid1", 00:22:40.842 "superblock": true, 00:22:40.842 "num_base_bdevs": 2, 00:22:40.842 "num_base_bdevs_discovered": 1, 00:22:40.842 "num_base_bdevs_operational": 1, 00:22:40.842 "base_bdevs_list": [ 00:22:40.842 { 00:22:40.842 "name": null, 00:22:40.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.842 "is_configured": false, 00:22:40.842 "data_offset": 256, 00:22:40.842 "data_size": 7936 00:22:40.842 }, 00:22:40.842 { 00:22:40.843 "name": "BaseBdev2", 00:22:40.843 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:40.843 "is_configured": true, 00:22:40.843 "data_offset": 256, 00:22:40.843 "data_size": 7936 00:22:40.843 } 00:22:40.843 ] 00:22:40.843 }' 00:22:40.843 10:30:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.843 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.409 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.409 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:41.409 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:41.409 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:41.409 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:41.409 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.409 10:30:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.409 10:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.409 "name": "raid_bdev1", 00:22:41.409 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:41.409 "strip_size_kb": 0, 00:22:41.409 "state": "online", 00:22:41.409 "raid_level": "raid1", 00:22:41.409 "superblock": true, 00:22:41.409 "num_base_bdevs": 2, 00:22:41.409 "num_base_bdevs_discovered": 1, 00:22:41.409 "num_base_bdevs_operational": 1, 00:22:41.409 "base_bdevs_list": [ 00:22:41.409 { 00:22:41.409 "name": null, 00:22:41.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.409 "is_configured": false, 00:22:41.409 "data_offset": 256, 00:22:41.409 "data_size": 7936 00:22:41.409 }, 00:22:41.409 { 00:22:41.409 "name": "BaseBdev2", 00:22:41.409 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:41.409 "is_configured": true, 00:22:41.409 "data_offset": 256, 00:22:41.409 "data_size": 7936 00:22:41.409 } 00:22:41.409 ] 00:22:41.409 }' 00:22:41.409 10:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:41.409 10:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:41.409 10:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:41.667 10:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:41.667 10:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:41.667 [2024-07-15 10:30:06.372083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:41.667 [2024-07-15 10:30:06.374064] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e5420 00:22:41.667 [2024-07-15 10:30:06.375098] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:41.667 10:30:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:43.042 "name": "raid_bdev1", 00:22:43.042 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:43.042 "strip_size_kb": 0, 00:22:43.042 "state": "online", 00:22:43.042 "raid_level": "raid1", 00:22:43.042 "superblock": true, 00:22:43.042 "num_base_bdevs": 2, 00:22:43.042 "num_base_bdevs_discovered": 2, 00:22:43.042 "num_base_bdevs_operational": 2, 00:22:43.042 "process": { 00:22:43.042 "type": "rebuild", 00:22:43.042 "target": "spare", 00:22:43.042 "progress": { 00:22:43.042 "blocks": 2816, 00:22:43.042 "percent": 35 00:22:43.042 } 00:22:43.042 }, 00:22:43.042 "base_bdevs_list": [ 00:22:43.042 { 00:22:43.042 "name": "spare", 00:22:43.042 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:43.042 "is_configured": true, 00:22:43.042 "data_offset": 256, 00:22:43.042 "data_size": 7936 00:22:43.042 }, 00:22:43.042 { 00:22:43.042 "name": "BaseBdev2", 00:22:43.042 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:43.042 "is_configured": true, 00:22:43.042 "data_offset": 256, 00:22:43.042 "data_size": 7936 00:22:43.042 } 00:22:43.042 ] 00:22:43.042 }' 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:22:43.042 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=828 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.042 10:30:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.042 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:43.042 "name": "raid_bdev1", 00:22:43.042 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:43.042 "strip_size_kb": 0, 00:22:43.042 "state": "online", 00:22:43.042 "raid_level": "raid1", 00:22:43.042 "superblock": true, 00:22:43.042 "num_base_bdevs": 2, 00:22:43.042 "num_base_bdevs_discovered": 2, 00:22:43.042 "num_base_bdevs_operational": 2, 00:22:43.042 "process": { 00:22:43.042 "type": "rebuild", 00:22:43.042 "target": "spare", 00:22:43.042 "progress": { 00:22:43.042 "blocks": 3584, 00:22:43.042 "percent": 45 00:22:43.042 } 00:22:43.042 }, 00:22:43.042 "base_bdevs_list": [ 00:22:43.042 { 00:22:43.042 "name": "spare", 00:22:43.042 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:43.042 "is_configured": true, 00:22:43.042 "data_offset": 256, 00:22:43.042 "data_size": 7936 00:22:43.042 }, 00:22:43.042 { 00:22:43.043 "name": "BaseBdev2", 00:22:43.043 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:43.043 "is_configured": true, 00:22:43.043 "data_offset": 256, 00:22:43.043 "data_size": 7936 00:22:43.043 } 00:22:43.043 ] 00:22:43.043 }' 00:22:43.043 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:43.301 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.301 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:43.301 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.302 10:30:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:44.238 10:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:44.238 10:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.238 10:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:44.238 10:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:44.238 10:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:44.238 10:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:44.238 10:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:22:44.238 10:30:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.497 10:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:44.497 "name": "raid_bdev1", 00:22:44.497 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:44.497 "strip_size_kb": 0, 00:22:44.497 "state": "online", 00:22:44.497 "raid_level": "raid1", 00:22:44.497 "superblock": true, 00:22:44.497 "num_base_bdevs": 2, 00:22:44.497 "num_base_bdevs_discovered": 2, 00:22:44.497 "num_base_bdevs_operational": 2, 00:22:44.497 "process": { 00:22:44.497 "type": "rebuild", 00:22:44.497 "target": "spare", 00:22:44.497 "progress": { 00:22:44.497 "blocks": 6656, 00:22:44.497 "percent": 83 00:22:44.497 } 00:22:44.497 }, 00:22:44.497 "base_bdevs_list": [ 00:22:44.497 { 00:22:44.497 "name": "spare", 00:22:44.497 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:44.497 "is_configured": true, 00:22:44.497 "data_offset": 256, 00:22:44.497 "data_size": 7936 00:22:44.497 }, 00:22:44.497 { 00:22:44.497 "name": "BaseBdev2", 00:22:44.497 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:44.497 "is_configured": true, 00:22:44.497 "data_offset": 256, 00:22:44.497 "data_size": 7936 00:22:44.497 } 00:22:44.497 ] 00:22:44.497 }' 00:22:44.497 10:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:44.497 10:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.497 10:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:44.497 10:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.497 10:30:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:22:44.755 [2024-07-15 10:30:09.496628] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:44.755 [2024-07-15 10:30:09.496670] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:44.755 [2024-07-15 10:30:09.496738] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.691 "name": "raid_bdev1", 
00:22:45.691 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:45.691 "strip_size_kb": 0, 00:22:45.691 "state": "online", 00:22:45.691 "raid_level": "raid1", 00:22:45.691 "superblock": true, 00:22:45.691 "num_base_bdevs": 2, 00:22:45.691 "num_base_bdevs_discovered": 2, 00:22:45.691 "num_base_bdevs_operational": 2, 00:22:45.691 "base_bdevs_list": [ 00:22:45.691 { 00:22:45.691 "name": "spare", 00:22:45.691 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:45.691 "is_configured": true, 00:22:45.691 "data_offset": 256, 00:22:45.691 "data_size": 7936 00:22:45.691 }, 00:22:45.691 { 00:22:45.691 "name": "BaseBdev2", 00:22:45.691 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:45.691 "is_configured": true, 00:22:45.691 "data_offset": 256, 00:22:45.691 "data_size": 7936 00:22:45.691 } 00:22:45.691 ] 00:22:45.691 }' 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.691 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.959 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.959 "name": "raid_bdev1", 00:22:45.959 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:45.959 "strip_size_kb": 0, 00:22:45.959 "state": "online", 00:22:45.959 "raid_level": "raid1", 00:22:45.959 "superblock": true, 00:22:45.959 "num_base_bdevs": 2, 00:22:45.959 "num_base_bdevs_discovered": 2, 00:22:45.959 "num_base_bdevs_operational": 2, 00:22:45.959 "base_bdevs_list": [ 00:22:45.959 { 00:22:45.959 "name": "spare", 00:22:45.959 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:45.960 "is_configured": true, 00:22:45.960 "data_offset": 256, 00:22:45.960 "data_size": 7936 00:22:45.960 }, 00:22:45.960 { 00:22:45.960 "name": "BaseBdev2", 00:22:45.960 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:45.960 "is_configured": true, 00:22:45.960 "data_offset": 256, 00:22:45.960 "data_size": 7936 00:22:45.960 } 00:22:45.960 ] 00:22:45.960 }' 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:45.960 10:30:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.960 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.217 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:46.217 "name": "raid_bdev1", 00:22:46.217 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:46.217 "strip_size_kb": 0, 00:22:46.217 "state": "online", 00:22:46.217 "raid_level": "raid1", 00:22:46.217 "superblock": true, 00:22:46.217 "num_base_bdevs": 2, 00:22:46.217 "num_base_bdevs_discovered": 2, 00:22:46.217 "num_base_bdevs_operational": 2, 00:22:46.217 "base_bdevs_list": [ 00:22:46.217 { 00:22:46.217 "name": "spare", 00:22:46.217 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:46.217 "is_configured": true, 00:22:46.217 "data_offset": 256, 00:22:46.217 "data_size": 7936 00:22:46.217 }, 00:22:46.217 { 00:22:46.217 "name": "BaseBdev2", 00:22:46.217 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:46.217 "is_configured": true, 00:22:46.217 "data_offset": 256, 00:22:46.217 "data_size": 7936 00:22:46.217 } 00:22:46.217 ] 00:22:46.217 }' 00:22:46.217 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:46.217 10:30:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:46.782 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:46.782 [2024-07-15 10:30:11.492666] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.782 [2024-07-15 10:30:11.492689] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:22:46.782 [2024-07-15 10:30:11.492731] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.782 [2024-07-15 10:30:11.492770] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.782 [2024-07-15 10:30:11.492778] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11e4fb0 name raid_bdev1, state offline 00:22:46.782 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.782 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:47.039 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:47.297 /dev/nbd0 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:47.297 10:30:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:47.297 1+0 records in 00:22:47.297 1+0 records out 00:22:47.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203263 s, 20.2 MB/s 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:47.297 10:30:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:47.297 /dev/nbd1 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:47.555 1+0 records in 00:22:47.555 1+0 records out 00:22:47.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198768 s, 20.6 MB/s 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:47.555 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:47.556 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:47.813 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:47.814 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:47.814 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:47.814 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:47.814 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:47.814 10:30:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:47.814 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:47.814 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:22:47.814 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:48.071 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:48.330 [2024-07-15 10:30:12.886698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:48.330 [2024-07-15 10:30:12.886729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.330 [2024-07-15 10:30:12.886743] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11e3b30 00:22:48.330 [2024-07-15 10:30:12.886751] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.330 [2024-07-15 10:30:12.887781] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.330 [2024-07-15 10:30:12.887803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:48.330 [2024-07-15 10:30:12.887840] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:48.330 [2024-07-15 10:30:12.887856] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:48.330 [2024-07-15 10:30:12.887927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:48.330 spare 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.330 10:30:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.330 [2024-07-15 10:30:12.988213] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x11e5cd0 
00:22:48.330 [2024-07-15 10:30:12.988226] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:48.330 [2024-07-15 10:30:12.988281] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e5a10 00:22:48.330 [2024-07-15 10:30:12.988367] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11e5cd0 00:22:48.330 [2024-07-15 10:30:12.988375] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11e5cd0 00:22:48.330 [2024-07-15 10:30:12.988427] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.330 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.330 "name": "raid_bdev1", 00:22:48.330 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:48.330 "strip_size_kb": 0, 00:22:48.330 "state": "online", 00:22:48.330 "raid_level": "raid1", 00:22:48.330 "superblock": true, 00:22:48.330 "num_base_bdevs": 2, 00:22:48.330 "num_base_bdevs_discovered": 2, 00:22:48.330 "num_base_bdevs_operational": 2, 00:22:48.330 "base_bdevs_list": [ 00:22:48.330 { 00:22:48.330 "name": "spare", 00:22:48.330 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:48.330 "is_configured": true, 00:22:48.330 "data_offset": 256, 00:22:48.330 "data_size": 7936 00:22:48.330 }, 00:22:48.330 { 00:22:48.330 "name": "BaseBdev2", 00:22:48.330 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:48.330 "is_configured": true, 00:22:48.330 "data_offset": 256, 00:22:48.330 "data_size": 7936 00:22:48.330 } 00:22:48.330 ] 00:22:48.330 }' 00:22:48.330 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.330 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:48.904 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:48.904 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:48.904 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:48.904 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:48.904 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:48.904 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.904 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.161 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:49.161 "name": "raid_bdev1", 00:22:49.161 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:49.161 "strip_size_kb": 0, 00:22:49.161 "state": "online", 00:22:49.161 "raid_level": "raid1", 00:22:49.161 "superblock": true, 00:22:49.161 "num_base_bdevs": 2, 00:22:49.161 "num_base_bdevs_discovered": 2, 00:22:49.161 "num_base_bdevs_operational": 2, 00:22:49.161 "base_bdevs_list": [ 00:22:49.161 { 00:22:49.161 "name": "spare", 00:22:49.161 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:49.161 "is_configured": true, 00:22:49.161 "data_offset": 256, 00:22:49.161 "data_size": 7936 00:22:49.161 }, 00:22:49.161 { 00:22:49.161 
"name": "BaseBdev2", 00:22:49.161 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:49.161 "is_configured": true, 00:22:49.161 "data_offset": 256, 00:22:49.161 "data_size": 7936 00:22:49.161 } 00:22:49.161 ] 00:22:49.161 }' 00:22:49.161 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:49.161 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:49.161 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:49.161 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:49.161 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.161 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:49.418 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:22:49.418 10:30:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:49.418 [2024-07-15 10:30:14.129968] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.418 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.676 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.676 "name": "raid_bdev1", 00:22:49.676 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:49.676 "strip_size_kb": 0, 00:22:49.676 "state": "online", 00:22:49.676 "raid_level": "raid1", 00:22:49.676 "superblock": true, 00:22:49.676 "num_base_bdevs": 2, 00:22:49.676 "num_base_bdevs_discovered": 1, 00:22:49.676 "num_base_bdevs_operational": 1, 00:22:49.676 
"base_bdevs_list": [ 00:22:49.676 { 00:22:49.676 "name": null, 00:22:49.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.676 "is_configured": false, 00:22:49.676 "data_offset": 256, 00:22:49.676 "data_size": 7936 00:22:49.676 }, 00:22:49.676 { 00:22:49.676 "name": "BaseBdev2", 00:22:49.676 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:49.676 "is_configured": true, 00:22:49.676 "data_offset": 256, 00:22:49.676 "data_size": 7936 00:22:49.676 } 00:22:49.676 ] 00:22:49.676 }' 00:22:49.676 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.676 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:50.243 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:50.243 [2024-07-15 10:30:14.968126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:50.243 [2024-07-15 10:30:14.968228] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:50.243 [2024-07-15 10:30:14.968239] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:50.243 [2024-07-15 10:30:14.968259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:50.243 [2024-07-15 10:30:14.970151] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e84c0 00:22:50.243 [2024-07-15 10:30:14.971739] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:50.243 10:30:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:22:51.620 10:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.620 10:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:51.620 10:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:51.620 10:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:51.620 10:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:51.620 10:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.620 10:30:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.620 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:51.620 "name": "raid_bdev1", 00:22:51.620 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:51.620 "strip_size_kb": 0, 00:22:51.620 "state": "online", 00:22:51.620 "raid_level": "raid1", 00:22:51.620 "superblock": true, 00:22:51.620 "num_base_bdevs": 2, 00:22:51.620 "num_base_bdevs_discovered": 2, 00:22:51.620 "num_base_bdevs_operational": 2, 00:22:51.620 "process": { 00:22:51.620 "type": "rebuild", 00:22:51.620 "target": "spare", 00:22:51.620 "progress": { 00:22:51.620 "blocks": 2816, 00:22:51.620 "percent": 35 00:22:51.620 } 00:22:51.620 }, 00:22:51.620 "base_bdevs_list": [ 
00:22:51.620 { 00:22:51.620 "name": "spare", 00:22:51.620 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:51.620 "is_configured": true, 00:22:51.620 "data_offset": 256, 00:22:51.620 "data_size": 7936 00:22:51.620 }, 00:22:51.620 { 00:22:51.620 "name": "BaseBdev2", 00:22:51.620 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:51.620 "is_configured": true, 00:22:51.620 "data_offset": 256, 00:22:51.620 "data_size": 7936 00:22:51.620 } 00:22:51.620 ] 00:22:51.620 }' 00:22:51.620 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:51.620 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.620 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:51.620 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.620 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:51.620 [2024-07-15 10:30:16.404765] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:51.879 [2024-07-15 10:30:16.482174] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:51.879 [2024-07-15 10:30:16.482205] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.879 [2024-07-15 10:30:16.482214] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:51.879 [2024-07-15 10:30:16.482236] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.879 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.138 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.138 "name": "raid_bdev1", 00:22:52.138 
"uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:52.138 "strip_size_kb": 0, 00:22:52.138 "state": "online", 00:22:52.138 "raid_level": "raid1", 00:22:52.138 "superblock": true, 00:22:52.138 "num_base_bdevs": 2, 00:22:52.138 "num_base_bdevs_discovered": 1, 00:22:52.138 "num_base_bdevs_operational": 1, 00:22:52.138 "base_bdevs_list": [ 00:22:52.138 { 00:22:52.138 "name": null, 00:22:52.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.138 "is_configured": false, 00:22:52.138 "data_offset": 256, 00:22:52.138 "data_size": 7936 00:22:52.138 }, 00:22:52.138 { 00:22:52.138 "name": "BaseBdev2", 00:22:52.138 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:52.138 "is_configured": true, 00:22:52.138 "data_offset": 256, 00:22:52.138 "data_size": 7936 00:22:52.138 } 00:22:52.138 ] 00:22:52.138 }' 00:22:52.138 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.138 10:30:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.396 10:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:52.655 [2024-07-15 10:30:17.331186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:52.655 [2024-07-15 10:30:17.331226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.655 [2024-07-15 10:30:17.331244] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x104c850 00:22:52.655 [2024-07-15 10:30:17.331252] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.655 [2024-07-15 10:30:17.331409] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.655 [2024-07-15 10:30:17.331420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:52.655 [2024-07-15 10:30:17.331461] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:52.655 [2024-07-15 10:30:17.331468] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:52.655 [2024-07-15 10:30:17.331476] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:52.655 [2024-07-15 10:30:17.331487] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.655 [2024-07-15 10:30:17.333383] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11e4c00 00:22:52.655 [2024-07-15 10:30:17.334435] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:52.655 spare 00:22:52.655 10:30:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:22:53.590 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.590 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:53.590 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:53.590 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:53.590 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:53.590 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.590 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.849 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:53.849 "name": "raid_bdev1", 00:22:53.849 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:53.849 "strip_size_kb": 0, 00:22:53.849 "state": "online", 00:22:53.849 "raid_level": "raid1", 00:22:53.849 "superblock": true, 00:22:53.849 "num_base_bdevs": 2, 00:22:53.849 "num_base_bdevs_discovered": 2, 00:22:53.849 "num_base_bdevs_operational": 2, 00:22:53.849 "process": { 00:22:53.849 "type": "rebuild", 00:22:53.849 "target": "spare", 00:22:53.849 "progress": { 00:22:53.849 "blocks": 2816, 00:22:53.849 "percent": 35 00:22:53.849 } 00:22:53.849 }, 00:22:53.849 "base_bdevs_list": [ 00:22:53.849 { 00:22:53.849 "name": "spare", 00:22:53.849 "uuid": "0e4dca15-6761-5ef9-874b-d87d5c9d35d3", 00:22:53.849 "is_configured": true, 00:22:53.849 "data_offset": 256, 00:22:53.849 "data_size": 7936 00:22:53.849 }, 00:22:53.849 { 00:22:53.849 "name": "BaseBdev2", 00:22:53.849 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:53.849 "is_configured": true, 00:22:53.849 "data_offset": 256, 00:22:53.849 "data_size": 7936 00:22:53.849 } 00:22:53.849 ] 00:22:53.849 }' 00:22:53.849 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:53.849 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.849 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:53.849 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:53.849 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:54.108 [2024-07-15 10:30:18.771830] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.108 [2024-07-15 10:30:18.844805] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:54.108 [2024-07-15 10:30:18.844835] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.108 [2024-07-15 10:30:18.844844] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.108 [2024-07-15 10:30:18.844866] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.108 10:30:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.366 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.366 "name": "raid_bdev1", 00:22:54.366 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:54.366 "strip_size_kb": 0, 00:22:54.366 "state": "online", 00:22:54.366 "raid_level": "raid1", 00:22:54.366 "superblock": true, 00:22:54.366 "num_base_bdevs": 2, 00:22:54.366 "num_base_bdevs_discovered": 1, 00:22:54.367 "num_base_bdevs_operational": 1, 00:22:54.367 "base_bdevs_list": [ 00:22:54.367 { 00:22:54.367 "name": null, 00:22:54.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.367 "is_configured": false, 00:22:54.367 "data_offset": 256, 00:22:54.367 "data_size": 7936 00:22:54.367 }, 00:22:54.367 { 00:22:54.367 "name": "BaseBdev2", 00:22:54.367 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:54.367 "is_configured": true, 00:22:54.367 "data_offset": 256, 00:22:54.367 "data_size": 7936 00:22:54.367 } 00:22:54.367 ] 00:22:54.367 }' 00:22:54.367 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.367 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:54.932 10:30:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.932 "name": "raid_bdev1", 00:22:54.932 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:54.932 "strip_size_kb": 0, 00:22:54.932 "state": "online", 00:22:54.932 "raid_level": "raid1", 00:22:54.932 "superblock": true, 00:22:54.932 "num_base_bdevs": 2, 00:22:54.932 "num_base_bdevs_discovered": 1, 00:22:54.932 "num_base_bdevs_operational": 1, 00:22:54.932 "base_bdevs_list": [ 00:22:54.932 { 00:22:54.932 "name": null, 00:22:54.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.932 "is_configured": false, 00:22:54.932 "data_offset": 256, 00:22:54.932 "data_size": 7936 00:22:54.932 }, 00:22:54.932 { 00:22:54.932 "name": "BaseBdev2", 00:22:54.932 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:54.932 "is_configured": true, 00:22:54.932 "data_offset": 256, 00:22:54.932 "data_size": 7936 00:22:54.932 } 00:22:54.932 ] 00:22:54.932 }' 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:54.932 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:55.190 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:55.190 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:55.190 10:30:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:55.448 [2024-07-15 10:30:20.087783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:55.448 [2024-07-15 10:30:20.087821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:55.448 [2024-07-15 10:30:20.087838] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x104d930 00:22:55.448 [2024-07-15 10:30:20.087847] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:55.448 [2024-07-15 10:30:20.087997] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:55.448 [2024-07-15 10:30:20.088008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:55.448 [2024-07-15 10:30:20.088042] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:55.448 [2024-07-15 10:30:20.088050] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:55.448 [2024-07-15 10:30:20.088057] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:55.448 BaseBdev1 00:22:55.448 10:30:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.382 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.640 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:56.640 "name": "raid_bdev1", 00:22:56.640 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:56.640 "strip_size_kb": 0, 00:22:56.640 "state": "online", 00:22:56.640 "raid_level": "raid1", 00:22:56.640 "superblock": true, 00:22:56.640 "num_base_bdevs": 2, 00:22:56.640 "num_base_bdevs_discovered": 1, 00:22:56.640 "num_base_bdevs_operational": 1, 00:22:56.640 "base_bdevs_list": [ 00:22:56.640 { 00:22:56.640 "name": null, 00:22:56.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.640 "is_configured": false, 00:22:56.640 "data_offset": 256, 00:22:56.640 "data_size": 7936 00:22:56.640 }, 00:22:56.640 { 00:22:56.640 "name": "BaseBdev2", 00:22:56.640 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:56.640 "is_configured": true, 00:22:56.640 "data_offset": 256, 00:22:56.640 "data_size": 7936 00:22:56.640 } 00:22:56.640 ] 00:22:56.640 }' 00:22:56.640 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:56.640 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.207 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.207 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:57.208 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:57.208 10:30:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:57.208 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:57.208 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.208 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.208 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:57.208 "name": "raid_bdev1", 00:22:57.208 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:57.208 "strip_size_kb": 0, 00:22:57.208 "state": "online", 00:22:57.208 "raid_level": "raid1", 00:22:57.208 "superblock": true, 00:22:57.208 "num_base_bdevs": 2, 00:22:57.208 "num_base_bdevs_discovered": 1, 00:22:57.208 "num_base_bdevs_operational": 1, 00:22:57.208 "base_bdevs_list": [ 00:22:57.208 { 00:22:57.208 "name": null, 00:22:57.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.208 "is_configured": false, 00:22:57.208 "data_offset": 256, 00:22:57.208 "data_size": 7936 00:22:57.208 }, 00:22:57.208 { 00:22:57.208 "name": "BaseBdev2", 00:22:57.208 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:57.208 "is_configured": true, 00:22:57.208 "data_offset": 256, 00:22:57.208 "data_size": 7936 00:22:57.208 } 00:22:57.208 ] 00:22:57.208 }' 00:22:57.208 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:57.208 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:57.208 10:30:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:57.466 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:57.467 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:57.467 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:57.467 [2024-07-15 10:30:22.173174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:57.467 [2024-07-15 10:30:22.173266] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:57.467 [2024-07-15 10:30:22.173275] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:57.467 request: 00:22:57.467 { 00:22:57.467 "base_bdev": "BaseBdev1", 00:22:57.467 "raid_bdev": "raid_bdev1", 00:22:57.467 "method": "bdev_raid_add_base_bdev", 00:22:57.467 "req_id": 1 00:22:57.467 } 00:22:57.467 Got JSON-RPC error response 00:22:57.467 response: 00:22:57.467 { 00:22:57.467 "code": -22, 00:22:57.467 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:57.467 } 00:22:57.467 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:22:57.467 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:57.467 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:57.467 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:57.467 10:30:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:22:58.841 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:58.841 "name": "raid_bdev1", 00:22:58.841 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:58.841 "strip_size_kb": 0, 00:22:58.841 "state": "online", 00:22:58.842 "raid_level": "raid1", 00:22:58.842 "superblock": true, 00:22:58.842 "num_base_bdevs": 2, 00:22:58.842 "num_base_bdevs_discovered": 1, 00:22:58.842 "num_base_bdevs_operational": 1, 00:22:58.842 "base_bdevs_list": [ 00:22:58.842 { 00:22:58.842 "name": null, 00:22:58.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.842 "is_configured": false, 00:22:58.842 "data_offset": 256, 00:22:58.842 "data_size": 7936 00:22:58.842 }, 00:22:58.842 { 00:22:58.842 "name": "BaseBdev2", 00:22:58.842 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:58.842 "is_configured": true, 00:22:58.842 "data_offset": 256, 00:22:58.842 "data_size": 7936 00:22:58.842 } 00:22:58.842 ] 00:22:58.842 }' 00:22:58.842 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:58.842 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.100 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.100 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:59.100 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:59.100 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:59.100 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:59.100 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.100 10:30:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:59.370 "name": "raid_bdev1", 00:22:59.370 "uuid": "fda67747-6986-4cca-bc10-04af21e0f221", 00:22:59.370 "strip_size_kb": 0, 00:22:59.370 "state": "online", 00:22:59.370 "raid_level": "raid1", 00:22:59.370 "superblock": true, 00:22:59.370 "num_base_bdevs": 2, 00:22:59.370 "num_base_bdevs_discovered": 1, 00:22:59.370 "num_base_bdevs_operational": 1, 00:22:59.370 "base_bdevs_list": [ 00:22:59.370 { 00:22:59.370 "name": null, 00:22:59.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.370 "is_configured": false, 00:22:59.370 "data_offset": 256, 00:22:59.370 "data_size": 7936 00:22:59.370 }, 00:22:59.370 { 00:22:59.370 "name": "BaseBdev2", 00:22:59.370 "uuid": "e9fe7c55-7381-572f-b649-7bb7c0efd9a3", 00:22:59.370 "is_configured": true, 00:22:59.370 "data_offset": 256, 00:22:59.370 "data_size": 7936 00:22:59.370 } 00:22:59.370 ] 00:22:59.370 }' 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:59.370 10:30:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 1895109 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 1895109 ']' 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 1895109 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1895109 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1895109' 00:22:59.370 killing process with pid 1895109 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 1895109 00:22:59.370 Received shutdown signal, test time was about 60.000000 seconds 00:22:59.370 00:22:59.370 Latency(us) 00:22:59.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.370 =================================================================================================================== 00:22:59.370 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.370 [2024-07-15 10:30:24.113332] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:59.370 [2024-07-15 10:30:24.113398] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.370 [2024-07-15 10:30:24.113429] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.370 [2024-07-15 10:30:24.113437] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11e5cd0 name raid_bdev1, state offline 00:22:59.370 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 1895109 00:22:59.370 [2024-07-15 10:30:24.139649] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:59.674 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:22:59.674 00:22:59.674 real 0m25.584s 00:22:59.674 user 0m38.558s 00:22:59.674 sys 0m4.064s 00:22:59.674 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.674 10:30:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.674 ************************************ 00:22:59.674 END TEST raid_rebuild_test_sb_md_separate 00:22:59.674 ************************************ 00:22:59.674 10:30:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:59.674 10:30:24 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:22:59.674 10:30:24 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:22:59.674 10:30:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:59.674 10:30:24 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.674 10:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:59.674 ************************************ 00:22:59.674 START TEST raid_state_function_test_sb_md_interleaved 00:22:59.674 ************************************ 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=1900350 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock 
-i 0 -L bdev_raid 00:22:59.674 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1900350' 00:22:59.674 Process raid pid: 1900350 00:22:59.675 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 1900350 /var/tmp/spdk-raid.sock 00:22:59.675 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 1900350 ']' 00:22:59.675 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:59.675 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.675 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:59.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:59.675 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.675 10:30:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.675 [2024-07-15 10:30:24.444255] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:59.675 [2024-07-15 10:30:24.444299] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:02.4 cannot 
be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:59.934 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:59.934 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:59.934 [2024-07-15 10:30:24.535640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.934 [2024-07-15 10:30:24.609524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.934 [2024-07-15 10:30:24.663113] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.934 [2024-07-15 10:30:24.663130] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:00.502 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.502 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:00.502 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:00.760 [2024-07-15 10:30:25.386095] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:00.760 [2024-07-15 10:30:25.386128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:00.760 [2024-07-15 10:30:25.386135] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:00.760 [2024-07-15 10:30:25.386142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.760 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.019 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.019 "name": "Existed_Raid", 00:23:01.019 "uuid": "9f9e080c-71ae-472e-821a-dcdea0e18f5f", 00:23:01.019 "strip_size_kb": 0, 00:23:01.019 "state": "configuring", 00:23:01.019 "raid_level": "raid1", 00:23:01.019 "superblock": true, 00:23:01.019 "num_base_bdevs": 2, 00:23:01.019 "num_base_bdevs_discovered": 0, 00:23:01.019 "num_base_bdevs_operational": 2, 00:23:01.019 "base_bdevs_list": [ 00:23:01.019 { 00:23:01.019 "name": "BaseBdev1", 00:23:01.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.019 "is_configured": false, 00:23:01.019 "data_offset": 0, 00:23:01.019 "data_size": 0 00:23:01.019 }, 00:23:01.019 { 00:23:01.019 "name": "BaseBdev2", 00:23:01.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.019 "is_configured": false, 00:23:01.019 "data_offset": 0, 00:23:01.019 "data_size": 0 00:23:01.019 } 00:23:01.019 ] 00:23:01.019 }' 00:23:01.019 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.019 10:30:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:23:01.277 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:01.536 [2024-07-15 10:30:26.192079] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:01.536 [2024-07-15 10:30:26.192097] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x110ef20 name Existed_Raid, state configuring 00:23:01.536 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:01.794 [2024-07-15 10:30:26.360525] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:01.794 [2024-07-15 10:30:26.360547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:01.794 [2024-07-15 10:30:26.360553] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:01.794 [2024-07-15 10:30:26.360561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:01.794 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:01.794 [2024-07-15 10:30:26.541640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:01.794 BaseBdev1 00:23:01.794 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:01.794 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:01.794 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:01.794 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:23:01.794 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:01.794 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:01.794 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:02.052 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:02.310 [ 00:23:02.310 { 00:23:02.310 "name": "BaseBdev1", 00:23:02.310 "aliases": [ 00:23:02.310 "1272edcc-dd67-47f8-8fd6-390b2b41e694" 00:23:02.310 ], 00:23:02.310 "product_name": "Malloc disk", 00:23:02.310 "block_size": 4128, 00:23:02.310 "num_blocks": 8192, 00:23:02.310 "uuid": "1272edcc-dd67-47f8-8fd6-390b2b41e694", 00:23:02.310 "md_size": 32, 00:23:02.310 "md_interleave": true, 00:23:02.310 "dif_type": 0, 00:23:02.310 "assigned_rate_limits": { 00:23:02.310 "rw_ios_per_sec": 0, 00:23:02.310 "rw_mbytes_per_sec": 0, 00:23:02.310 "r_mbytes_per_sec": 0, 00:23:02.310 "w_mbytes_per_sec": 0 00:23:02.310 }, 00:23:02.310 
"claimed": true, 00:23:02.310 "claim_type": "exclusive_write", 00:23:02.310 "zoned": false, 00:23:02.310 "supported_io_types": { 00:23:02.310 "read": true, 00:23:02.310 "write": true, 00:23:02.310 "unmap": true, 00:23:02.310 "flush": true, 00:23:02.310 "reset": true, 00:23:02.310 "nvme_admin": false, 00:23:02.310 "nvme_io": false, 00:23:02.310 "nvme_io_md": false, 00:23:02.310 "write_zeroes": true, 00:23:02.310 "zcopy": true, 00:23:02.310 "get_zone_info": false, 00:23:02.310 "zone_management": false, 00:23:02.310 "zone_append": false, 00:23:02.310 "compare": false, 00:23:02.310 "compare_and_write": false, 00:23:02.310 "abort": true, 00:23:02.310 "seek_hole": false, 00:23:02.310 "seek_data": false, 00:23:02.310 "copy": true, 00:23:02.310 "nvme_iov_md": false 00:23:02.310 }, 00:23:02.310 "memory_domains": [ 00:23:02.310 { 00:23:02.310 "dma_device_id": "system", 00:23:02.310 "dma_device_type": 1 00:23:02.310 }, 00:23:02.310 { 00:23:02.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.310 "dma_device_type": 2 00:23:02.310 } 00:23:02.310 ], 00:23:02.310 "driver_specific": {} 00:23:02.310 } 00:23:02.310 ] 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.310 10:30:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.310 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.310 "name": "Existed_Raid", 00:23:02.310 "uuid": "0f64ee44-6295-4262-9659-ec22d37220e9", 00:23:02.310 "strip_size_kb": 0, 00:23:02.310 "state": "configuring", 00:23:02.310 "raid_level": "raid1", 00:23:02.310 "superblock": true, 00:23:02.310 "num_base_bdevs": 2, 00:23:02.310 "num_base_bdevs_discovered": 1, 00:23:02.310 "num_base_bdevs_operational": 2, 00:23:02.310 "base_bdevs_list": [ 00:23:02.310 { 00:23:02.310 "name": "BaseBdev1", 00:23:02.310 "uuid": 
"1272edcc-dd67-47f8-8fd6-390b2b41e694", 00:23:02.310 "is_configured": true, 00:23:02.310 "data_offset": 256, 00:23:02.310 "data_size": 7936 00:23:02.310 }, 00:23:02.310 { 00:23:02.310 "name": "BaseBdev2", 00:23:02.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.310 "is_configured": false, 00:23:02.310 "data_offset": 0, 00:23:02.310 "data_size": 0 00:23:02.310 } 00:23:02.310 ] 00:23:02.310 }' 00:23:02.310 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.310 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.877 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:02.877 [2024-07-15 10:30:27.648487] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:02.877 [2024-07-15 10:30:27.648511] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x110e810 name Existed_Raid, state configuring 00:23:03.135 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:23:03.135 [2024-07-15 10:30:27.820979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.135 [2024-07-15 10:30:27.822024] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:03.135 [2024-07-15 10:30:27.822048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:03.135 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:03.135 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:03.135 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:03.135 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:03.135 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:03.135 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:03.135 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:03.136 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:03.136 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.136 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.136 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.136 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.136 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.136 10:30:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.394 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.394 "name": "Existed_Raid", 00:23:03.394 "uuid": "283061a1-dbb2-470d-bf65-47b283c31c1b", 00:23:03.394 "strip_size_kb": 0, 00:23:03.394 "state": "configuring", 00:23:03.394 "raid_level": "raid1", 00:23:03.394 "superblock": true, 00:23:03.394 "num_base_bdevs": 2, 00:23:03.394 "num_base_bdevs_discovered": 1, 00:23:03.394 "num_base_bdevs_operational": 2, 00:23:03.394 "base_bdevs_list": [ 00:23:03.394 { 00:23:03.394 "name": "BaseBdev1", 00:23:03.394 "uuid": "1272edcc-dd67-47f8-8fd6-390b2b41e694", 00:23:03.394 "is_configured": true, 00:23:03.394 "data_offset": 256, 00:23:03.394 "data_size": 7936 00:23:03.394 }, 00:23:03.394 { 00:23:03.394 "name": "BaseBdev2", 00:23:03.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.394 "is_configured": false, 00:23:03.394 "data_offset": 0, 00:23:03.394 "data_size": 0 00:23:03.394 } 00:23:03.394 ] 00:23:03.394 }' 00:23:03.394 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.394 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.962 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:03.962 [2024-07-15 10:30:28.605850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:03.962 [2024-07-15 10:30:28.605949] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1190710 00:23:03.962 [2024-07-15 10:30:28.605977] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:03.962 [2024-07-15 10:30:28.606016] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a1f60 00:23:03.962 [2024-07-15 10:30:28.606069] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1190710 00:23:03.962 [2024-07-15 10:30:28.606076] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1190710 00:23:03.962 [2024-07-15 10:30:28.606112] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.962 BaseBdev2 00:23:03.962 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:03.962 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:03.962 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:03.962 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:23:03.962 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:03.962 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:03.962 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:04.221 [ 00:23:04.221 { 00:23:04.221 "name": "BaseBdev2", 00:23:04.221 "aliases": [ 00:23:04.221 "d8a4e577-32f3-4745-85bb-18a11b658d73" 00:23:04.221 ], 00:23:04.221 "product_name": "Malloc disk", 00:23:04.221 "block_size": 4128, 00:23:04.221 "num_blocks": 8192, 00:23:04.221 "uuid": "d8a4e577-32f3-4745-85bb-18a11b658d73", 00:23:04.221 "md_size": 32, 00:23:04.221 "md_interleave": true, 00:23:04.221 "dif_type": 0, 00:23:04.221 "assigned_rate_limits": { 00:23:04.221 "rw_ios_per_sec": 0, 00:23:04.221 "rw_mbytes_per_sec": 0, 00:23:04.221 "r_mbytes_per_sec": 0, 00:23:04.221 "w_mbytes_per_sec": 0 00:23:04.221 }, 00:23:04.221 "claimed": true, 00:23:04.221 "claim_type": "exclusive_write", 00:23:04.221 "zoned": false, 00:23:04.221 "supported_io_types": { 00:23:04.221 "read": true, 00:23:04.221 "write": true, 00:23:04.221 "unmap": true, 00:23:04.221 "flush": true, 00:23:04.221 "reset": true, 00:23:04.221 "nvme_admin": false, 00:23:04.221 "nvme_io": false, 00:23:04.221 "nvme_io_md": false, 00:23:04.221 "write_zeroes": true, 00:23:04.221 "zcopy": true, 00:23:04.221 "get_zone_info": false, 00:23:04.221 "zone_management": false, 00:23:04.221 "zone_append": false, 00:23:04.221 "compare": false, 00:23:04.221 "compare_and_write": false, 00:23:04.221 "abort": true, 00:23:04.221 "seek_hole": false, 00:23:04.221 "seek_data": false, 00:23:04.221 "copy": true, 00:23:04.221 "nvme_iov_md": false 00:23:04.221 }, 00:23:04.221 "memory_domains": [ 00:23:04.221 { 00:23:04.221 "dma_device_id": "system", 00:23:04.221 "dma_device_type": 1 00:23:04.221 }, 00:23:04.221 { 00:23:04.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.221 "dma_device_type": 2 00:23:04.221 } 00:23:04.221 ], 00:23:04.221 "driver_specific": {} 00:23:04.221 } 00:23:04.221 ] 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.221 10:30:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.480 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.480 "name": "Existed_Raid", 00:23:04.480 "uuid": "283061a1-dbb2-470d-bf65-47b283c31c1b", 00:23:04.480 "strip_size_kb": 0, 00:23:04.480 "state": "online", 00:23:04.480 "raid_level": "raid1", 00:23:04.480 "superblock": true, 00:23:04.480 "num_base_bdevs": 2, 00:23:04.480 "num_base_bdevs_discovered": 2, 00:23:04.480 "num_base_bdevs_operational": 2, 00:23:04.480 "base_bdevs_list": [ 00:23:04.480 { 00:23:04.480 "name": "BaseBdev1", 00:23:04.480 "uuid": "1272edcc-dd67-47f8-8fd6-390b2b41e694", 00:23:04.480 "is_configured": true, 00:23:04.480 "data_offset": 256, 00:23:04.480 "data_size": 7936 00:23:04.480 }, 00:23:04.480 { 00:23:04.480 "name": "BaseBdev2", 00:23:04.480 "uuid": "d8a4e577-32f3-4745-85bb-18a11b658d73", 00:23:04.480 "is_configured": true, 00:23:04.480 "data_offset": 256, 00:23:04.480 "data_size": 7936 00:23:04.480 } 00:23:04.480 ] 00:23:04.480 }' 00:23:04.480 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.480 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:05.047 [2024-07-15 10:30:29.773047] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:05.047 "name": "Existed_Raid", 00:23:05.047 "aliases": [ 00:23:05.047 "283061a1-dbb2-470d-bf65-47b283c31c1b" 00:23:05.047 ], 00:23:05.047 "product_name": "Raid Volume", 00:23:05.047 "block_size": 4128, 00:23:05.047 "num_blocks": 7936, 00:23:05.047 "uuid": "283061a1-dbb2-470d-bf65-47b283c31c1b", 00:23:05.047 "md_size": 32, 00:23:05.047 "md_interleave": true, 00:23:05.047 "dif_type": 0, 00:23:05.047 
"assigned_rate_limits": { 00:23:05.047 "rw_ios_per_sec": 0, 00:23:05.047 "rw_mbytes_per_sec": 0, 00:23:05.047 "r_mbytes_per_sec": 0, 00:23:05.047 "w_mbytes_per_sec": 0 00:23:05.047 }, 00:23:05.047 "claimed": false, 00:23:05.047 "zoned": false, 00:23:05.047 "supported_io_types": { 00:23:05.047 "read": true, 00:23:05.047 "write": true, 00:23:05.047 "unmap": false, 00:23:05.047 "flush": false, 00:23:05.047 "reset": true, 00:23:05.047 "nvme_admin": false, 00:23:05.047 "nvme_io": false, 00:23:05.047 "nvme_io_md": false, 00:23:05.047 "write_zeroes": true, 00:23:05.047 "zcopy": false, 00:23:05.047 "get_zone_info": false, 00:23:05.047 "zone_management": false, 00:23:05.047 "zone_append": false, 00:23:05.047 "compare": false, 00:23:05.047 "compare_and_write": false, 00:23:05.047 "abort": false, 00:23:05.047 "seek_hole": false, 00:23:05.047 "seek_data": false, 00:23:05.047 "copy": false, 00:23:05.047 "nvme_iov_md": false 00:23:05.047 }, 00:23:05.047 "memory_domains": [ 00:23:05.047 { 00:23:05.047 "dma_device_id": "system", 00:23:05.047 "dma_device_type": 1 00:23:05.047 }, 00:23:05.047 { 00:23:05.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.047 "dma_device_type": 2 00:23:05.047 }, 00:23:05.047 { 00:23:05.047 "dma_device_id": "system", 00:23:05.047 "dma_device_type": 1 00:23:05.047 }, 00:23:05.047 { 00:23:05.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.047 "dma_device_type": 2 00:23:05.047 } 00:23:05.047 ], 00:23:05.047 "driver_specific": { 00:23:05.047 "raid": { 00:23:05.047 "uuid": "283061a1-dbb2-470d-bf65-47b283c31c1b", 00:23:05.047 "strip_size_kb": 0, 00:23:05.047 "state": "online", 00:23:05.047 "raid_level": "raid1", 00:23:05.047 "superblock": true, 00:23:05.047 "num_base_bdevs": 2, 00:23:05.047 "num_base_bdevs_discovered": 2, 00:23:05.047 "num_base_bdevs_operational": 2, 00:23:05.047 "base_bdevs_list": [ 00:23:05.047 { 00:23:05.047 "name": "BaseBdev1", 00:23:05.047 "uuid": "1272edcc-dd67-47f8-8fd6-390b2b41e694", 00:23:05.047 "is_configured": true, 00:23:05.047 "data_offset": 256, 00:23:05.047 "data_size": 7936 00:23:05.047 }, 00:23:05.047 { 00:23:05.047 "name": "BaseBdev2", 00:23:05.047 "uuid": "d8a4e577-32f3-4745-85bb-18a11b658d73", 00:23:05.047 "is_configured": true, 00:23:05.047 "data_offset": 256, 00:23:05.047 "data_size": 7936 00:23:05.047 } 00:23:05.047 ] 00:23:05.047 } 00:23:05.047 } 00:23:05.047 }' 00:23:05.047 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:05.306 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:05.306 BaseBdev2' 00:23:05.306 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:05.306 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:05.306 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:05.306 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:05.306 "name": "BaseBdev1", 00:23:05.306 "aliases": [ 00:23:05.306 "1272edcc-dd67-47f8-8fd6-390b2b41e694" 00:23:05.306 ], 00:23:05.306 "product_name": "Malloc disk", 00:23:05.306 "block_size": 4128, 00:23:05.306 "num_blocks": 8192, 00:23:05.306 
"uuid": "1272edcc-dd67-47f8-8fd6-390b2b41e694", 00:23:05.306 "md_size": 32, 00:23:05.306 "md_interleave": true, 00:23:05.306 "dif_type": 0, 00:23:05.306 "assigned_rate_limits": { 00:23:05.306 "rw_ios_per_sec": 0, 00:23:05.306 "rw_mbytes_per_sec": 0, 00:23:05.306 "r_mbytes_per_sec": 0, 00:23:05.306 "w_mbytes_per_sec": 0 00:23:05.306 }, 00:23:05.306 "claimed": true, 00:23:05.306 "claim_type": "exclusive_write", 00:23:05.306 "zoned": false, 00:23:05.306 "supported_io_types": { 00:23:05.306 "read": true, 00:23:05.306 "write": true, 00:23:05.306 "unmap": true, 00:23:05.306 "flush": true, 00:23:05.306 "reset": true, 00:23:05.306 "nvme_admin": false, 00:23:05.306 "nvme_io": false, 00:23:05.306 "nvme_io_md": false, 00:23:05.306 "write_zeroes": true, 00:23:05.306 "zcopy": true, 00:23:05.306 "get_zone_info": false, 00:23:05.306 "zone_management": false, 00:23:05.306 "zone_append": false, 00:23:05.306 "compare": false, 00:23:05.306 "compare_and_write": false, 00:23:05.306 "abort": true, 00:23:05.306 "seek_hole": false, 00:23:05.306 "seek_data": false, 00:23:05.306 "copy": true, 00:23:05.306 "nvme_iov_md": false 00:23:05.306 }, 00:23:05.306 "memory_domains": [ 00:23:05.306 { 00:23:05.306 "dma_device_id": "system", 00:23:05.306 "dma_device_type": 1 00:23:05.307 }, 00:23:05.307 { 00:23:05.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.307 "dma_device_type": 2 00:23:05.307 } 00:23:05.307 ], 00:23:05.307 "driver_specific": {} 00:23:05.307 }' 00:23:05.307 10:30:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:05.307 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:05.307 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:05.307 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:05.565 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:05.824 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:05.824 "name": "BaseBdev2", 00:23:05.824 "aliases": [ 
00:23:05.824 "d8a4e577-32f3-4745-85bb-18a11b658d73" 00:23:05.824 ], 00:23:05.824 "product_name": "Malloc disk", 00:23:05.824 "block_size": 4128, 00:23:05.824 "num_blocks": 8192, 00:23:05.824 "uuid": "d8a4e577-32f3-4745-85bb-18a11b658d73", 00:23:05.824 "md_size": 32, 00:23:05.824 "md_interleave": true, 00:23:05.824 "dif_type": 0, 00:23:05.824 "assigned_rate_limits": { 00:23:05.824 "rw_ios_per_sec": 0, 00:23:05.824 "rw_mbytes_per_sec": 0, 00:23:05.824 "r_mbytes_per_sec": 0, 00:23:05.824 "w_mbytes_per_sec": 0 00:23:05.824 }, 00:23:05.824 "claimed": true, 00:23:05.824 "claim_type": "exclusive_write", 00:23:05.824 "zoned": false, 00:23:05.824 "supported_io_types": { 00:23:05.824 "read": true, 00:23:05.824 "write": true, 00:23:05.824 "unmap": true, 00:23:05.824 "flush": true, 00:23:05.824 "reset": true, 00:23:05.824 "nvme_admin": false, 00:23:05.824 "nvme_io": false, 00:23:05.824 "nvme_io_md": false, 00:23:05.824 "write_zeroes": true, 00:23:05.824 "zcopy": true, 00:23:05.824 "get_zone_info": false, 00:23:05.824 "zone_management": false, 00:23:05.824 "zone_append": false, 00:23:05.824 "compare": false, 00:23:05.824 "compare_and_write": false, 00:23:05.824 "abort": true, 00:23:05.824 "seek_hole": false, 00:23:05.824 "seek_data": false, 00:23:05.824 "copy": true, 00:23:05.824 "nvme_iov_md": false 00:23:05.824 }, 00:23:05.824 "memory_domains": [ 00:23:05.824 { 00:23:05.824 "dma_device_id": "system", 00:23:05.824 "dma_device_type": 1 00:23:05.824 }, 00:23:05.824 { 00:23:05.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.824 "dma_device_type": 2 00:23:05.824 } 00:23:05.824 ], 00:23:05.824 "driver_specific": {} 00:23:05.824 }' 00:23:05.824 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:05.824 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:05.824 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:05.824 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:05.824 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:06.082 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:06.082 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:06.082 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:06.082 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:06.082 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:06.082 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:06.082 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:06.082 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:06.349 [2024-07-15 10:30:30.919853] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 
00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.349 10:30:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.349 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.349 "name": "Existed_Raid", 00:23:06.349 "uuid": "283061a1-dbb2-470d-bf65-47b283c31c1b", 00:23:06.349 "strip_size_kb": 0, 00:23:06.349 "state": "online", 00:23:06.349 "raid_level": "raid1", 00:23:06.349 "superblock": true, 00:23:06.349 "num_base_bdevs": 2, 00:23:06.349 "num_base_bdevs_discovered": 1, 00:23:06.349 "num_base_bdevs_operational": 1, 00:23:06.349 "base_bdevs_list": [ 00:23:06.349 { 00:23:06.349 "name": null, 00:23:06.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.349 "is_configured": false, 00:23:06.349 "data_offset": 256, 00:23:06.349 "data_size": 7936 00:23:06.349 }, 00:23:06.349 { 00:23:06.349 "name": "BaseBdev2", 00:23:06.349 "uuid": "d8a4e577-32f3-4745-85bb-18a11b658d73", 00:23:06.349 "is_configured": true, 00:23:06.349 "data_offset": 256, 00:23:06.349 "data_size": 7936 00:23:06.349 } 00:23:06.349 ] 00:23:06.349 }' 00:23:06.349 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.349 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.913 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:06.913 10:30:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:06.913 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.913 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:07.171 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:07.171 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:07.171 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:07.171 [2024-07-15 10:30:31.931366] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:07.171 [2024-07-15 10:30:31.931421] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:07.171 [2024-07-15 10:30:31.941565] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.171 [2024-07-15 10:30:31.941589] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:07.171 [2024-07-15 10:30:31.941596] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1190710 name Existed_Raid, state offline 00:23:07.171 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:07.171 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:07.171 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.429 10:30:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 1900350 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 1900350 ']' 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 1900350 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1900350 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.429 10:30:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1900350' 00:23:07.429 killing process with pid 1900350 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 1900350 00:23:07.429 [2024-07-15 10:30:32.176548] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:07.429 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 1900350 00:23:07.429 [2024-07-15 10:30:32.177327] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:07.715 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:23:07.715 00:23:07.715 real 0m7.962s 00:23:07.715 user 0m14.011s 00:23:07.715 sys 0m1.578s 00:23:07.715 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:07.715 10:30:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.715 ************************************ 00:23:07.715 END TEST raid_state_function_test_sb_md_interleaved 00:23:07.715 ************************************ 00:23:07.715 10:30:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:07.715 10:30:32 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:07.715 10:30:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:07.715 10:30:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.715 10:30:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:07.715 ************************************ 00:23:07.715 START TEST raid_superblock_test_md_interleaved 00:23:07.715 ************************************ 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:07.715 10:30:32 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=1901905 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 1901905 /var/tmp/spdk-raid.sock 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 1901905 ']' 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:07.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.715 10:30:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.715 [2024-07-15 10:30:32.494479] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
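Note: the trace above shows raid_superblock_test_md_interleaved bringing up its own RPC target before issuing any bdev_raid RPCs; a minimal sketch of that step, using the binary, socket and log flag from the trace (capturing the pid via $! is an assumption about how the harness sets raid_pid, which is 1901905 in this run):

    # start a bare bdev service with raid debug logging on the dedicated raid test socket
    /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!   # waitforlisten then polls this pid and socket until RPCs are accepted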
00:23:07.715 [2024-07-15 10:30:32.494524] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901905 ] 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:07.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:07.974 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.974 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:07.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.975 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:07.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.975 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:07.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.975 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:07.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.975 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:07.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.975 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:07.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.975 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:07.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.975 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:07.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:07.975 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:07.975 [2024-07-15 10:30:32.584595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.975 [2024-07-15 10:30:32.664834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.975 [2024-07-15 10:30:32.720548] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.975 [2024-07-15 10:30:32.720573] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:08.542 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:08.799 malloc1 00:23:08.799 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:09.057 [2024-07-15 10:30:33.604943] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:09.057 [2024-07-15 10:30:33.604982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.057 [2024-07-15 10:30:33.604994] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25bd310 00:23:09.057 [2024-07-15 10:30:33.605002] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.057 [2024-07-15 10:30:33.605976] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.057 [2024-07-15 10:30:33.605997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:09.057 pt1 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:09.057 malloc2 00:23:09.057 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:09.315 [2024-07-15 10:30:33.945544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:09.315 [2024-07-15 10:30:33.945571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.315 [2024-07-15 10:30:33.945582] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b4950 00:23:09.315 [2024-07-15 10:30:33.945606] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.315 [2024-07-15 10:30:33.946410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.315 [2024-07-15 10:30:33.946429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:09.315 pt2 00:23:09.315 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:09.315 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:09.315 10:30:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:23:09.573 [2024-07-15 10:30:34.113983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
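Note: condensed from the trace above and below, the raid_bdev1 under test is assembled from two interleaved-metadata malloc bdevs, each wrapped in a passthru bdev, and then combined into a raid1 volume with an on-disk superblock; a minimal sketch, where rpc is a hypothetical shorthand for the workspace's scripts/rpc.py pointed at the raid test socket:

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # 32 MB malloc bdevs with 4096-byte blocks and 32 bytes of metadata interleaved per block (-m 32 -i)
    rpc bdev_malloc_create 32 4096 -m 32 -i -b malloc1
    rpc bdev_malloc_create 32 4096 -m 32 -i -b malloc2
    # passthru bdevs claim the malloc bdevs and expose them under fixed names/UUIDs
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble both passthru bdevs into a raid1 volume that carries a superblock (-s)
    rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s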
00:23:09.573 [2024-07-15 10:30:34.114705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:09.573 [2024-07-15 10:30:34.114793] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25bdae0 00:23:09.573 [2024-07-15 10:30:34.114801] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:09.573 [2024-07-15 10:30:34.114841] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x241ff50 00:23:09.573 [2024-07-15 10:30:34.114894] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25bdae0 00:23:09.573 [2024-07-15 10:30:34.114911] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25bdae0 00:23:09.573 [2024-07-15 10:30:34.114946] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.573 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:09.573 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:09.573 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.574 "name": "raid_bdev1", 00:23:09.574 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:09.574 "strip_size_kb": 0, 00:23:09.574 "state": "online", 00:23:09.574 "raid_level": "raid1", 00:23:09.574 "superblock": true, 00:23:09.574 "num_base_bdevs": 2, 00:23:09.574 "num_base_bdevs_discovered": 2, 00:23:09.574 "num_base_bdevs_operational": 2, 00:23:09.574 "base_bdevs_list": [ 00:23:09.574 { 00:23:09.574 "name": "pt1", 00:23:09.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:09.574 "is_configured": true, 00:23:09.574 "data_offset": 256, 00:23:09.574 "data_size": 7936 00:23:09.574 }, 00:23:09.574 { 00:23:09.574 "name": "pt2", 00:23:09.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:09.574 "is_configured": true, 00:23:09.574 "data_offset": 256, 00:23:09.574 "data_size": 7936 00:23:09.574 } 00:23:09.574 ] 00:23:09.574 }' 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:23:09.574 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.140 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:10.140 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:10.140 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:10.140 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:10.140 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:10.140 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:10.140 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:10.140 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:10.399 [2024-07-15 10:30:34.968363] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.399 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:10.399 "name": "raid_bdev1", 00:23:10.399 "aliases": [ 00:23:10.399 "d5873836-57fd-4c31-9198-bd8fe198f174" 00:23:10.399 ], 00:23:10.399 "product_name": "Raid Volume", 00:23:10.399 "block_size": 4128, 00:23:10.399 "num_blocks": 7936, 00:23:10.399 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:10.399 "md_size": 32, 00:23:10.399 "md_interleave": true, 00:23:10.399 "dif_type": 0, 00:23:10.399 "assigned_rate_limits": { 00:23:10.399 "rw_ios_per_sec": 0, 00:23:10.399 "rw_mbytes_per_sec": 0, 00:23:10.399 "r_mbytes_per_sec": 0, 00:23:10.399 "w_mbytes_per_sec": 0 00:23:10.399 }, 00:23:10.399 "claimed": false, 00:23:10.399 "zoned": false, 00:23:10.399 "supported_io_types": { 00:23:10.399 "read": true, 00:23:10.399 "write": true, 00:23:10.399 "unmap": false, 00:23:10.399 "flush": false, 00:23:10.399 "reset": true, 00:23:10.399 "nvme_admin": false, 00:23:10.399 "nvme_io": false, 00:23:10.399 "nvme_io_md": false, 00:23:10.399 "write_zeroes": true, 00:23:10.399 "zcopy": false, 00:23:10.399 "get_zone_info": false, 00:23:10.399 "zone_management": false, 00:23:10.399 "zone_append": false, 00:23:10.399 "compare": false, 00:23:10.399 "compare_and_write": false, 00:23:10.399 "abort": false, 00:23:10.399 "seek_hole": false, 00:23:10.399 "seek_data": false, 00:23:10.399 "copy": false, 00:23:10.399 "nvme_iov_md": false 00:23:10.399 }, 00:23:10.399 "memory_domains": [ 00:23:10.399 { 00:23:10.399 "dma_device_id": "system", 00:23:10.399 "dma_device_type": 1 00:23:10.399 }, 00:23:10.399 { 00:23:10.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.399 "dma_device_type": 2 00:23:10.399 }, 00:23:10.399 { 00:23:10.399 "dma_device_id": "system", 00:23:10.399 "dma_device_type": 1 00:23:10.399 }, 00:23:10.399 { 00:23:10.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.399 "dma_device_type": 2 00:23:10.399 } 00:23:10.399 ], 00:23:10.399 "driver_specific": { 00:23:10.399 "raid": { 00:23:10.399 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:10.399 "strip_size_kb": 0, 00:23:10.399 "state": "online", 00:23:10.399 "raid_level": "raid1", 00:23:10.399 "superblock": true, 00:23:10.399 "num_base_bdevs": 2, 
00:23:10.399 "num_base_bdevs_discovered": 2, 00:23:10.399 "num_base_bdevs_operational": 2, 00:23:10.399 "base_bdevs_list": [ 00:23:10.399 { 00:23:10.399 "name": "pt1", 00:23:10.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:10.399 "is_configured": true, 00:23:10.399 "data_offset": 256, 00:23:10.399 "data_size": 7936 00:23:10.399 }, 00:23:10.399 { 00:23:10.399 "name": "pt2", 00:23:10.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:10.399 "is_configured": true, 00:23:10.399 "data_offset": 256, 00:23:10.399 "data_size": 7936 00:23:10.399 } 00:23:10.399 ] 00:23:10.399 } 00:23:10.399 } 00:23:10.399 }' 00:23:10.399 10:30:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:10.399 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:10.399 pt2' 00:23:10.399 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:10.399 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:10.399 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:10.657 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:10.657 "name": "pt1", 00:23:10.657 "aliases": [ 00:23:10.657 "00000000-0000-0000-0000-000000000001" 00:23:10.657 ], 00:23:10.657 "product_name": "passthru", 00:23:10.657 "block_size": 4128, 00:23:10.657 "num_blocks": 8192, 00:23:10.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:10.657 "md_size": 32, 00:23:10.657 "md_interleave": true, 00:23:10.657 "dif_type": 0, 00:23:10.657 "assigned_rate_limits": { 00:23:10.657 "rw_ios_per_sec": 0, 00:23:10.657 "rw_mbytes_per_sec": 0, 00:23:10.657 "r_mbytes_per_sec": 0, 00:23:10.657 "w_mbytes_per_sec": 0 00:23:10.657 }, 00:23:10.657 "claimed": true, 00:23:10.657 "claim_type": "exclusive_write", 00:23:10.657 "zoned": false, 00:23:10.657 "supported_io_types": { 00:23:10.657 "read": true, 00:23:10.657 "write": true, 00:23:10.657 "unmap": true, 00:23:10.657 "flush": true, 00:23:10.657 "reset": true, 00:23:10.657 "nvme_admin": false, 00:23:10.658 "nvme_io": false, 00:23:10.658 "nvme_io_md": false, 00:23:10.658 "write_zeroes": true, 00:23:10.658 "zcopy": true, 00:23:10.658 "get_zone_info": false, 00:23:10.658 "zone_management": false, 00:23:10.658 "zone_append": false, 00:23:10.658 "compare": false, 00:23:10.658 "compare_and_write": false, 00:23:10.658 "abort": true, 00:23:10.658 "seek_hole": false, 00:23:10.658 "seek_data": false, 00:23:10.658 "copy": true, 00:23:10.658 "nvme_iov_md": false 00:23:10.658 }, 00:23:10.658 "memory_domains": [ 00:23:10.658 { 00:23:10.658 "dma_device_id": "system", 00:23:10.658 "dma_device_type": 1 00:23:10.658 }, 00:23:10.658 { 00:23:10.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.658 "dma_device_type": 2 00:23:10.658 } 00:23:10.658 ], 00:23:10.658 "driver_specific": { 00:23:10.658 "passthru": { 00:23:10.658 "name": "pt1", 00:23:10.658 "base_bdev_name": "malloc1" 00:23:10.658 } 00:23:10.658 } 00:23:10.658 }' 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:10.658 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.916 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:10.916 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:10.916 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:10.916 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:10.916 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:10.916 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:10.916 "name": "pt2", 00:23:10.916 "aliases": [ 00:23:10.916 "00000000-0000-0000-0000-000000000002" 00:23:10.916 ], 00:23:10.916 "product_name": "passthru", 00:23:10.916 "block_size": 4128, 00:23:10.916 "num_blocks": 8192, 00:23:10.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:10.916 "md_size": 32, 00:23:10.916 "md_interleave": true, 00:23:10.916 "dif_type": 0, 00:23:10.916 "assigned_rate_limits": { 00:23:10.916 "rw_ios_per_sec": 0, 00:23:10.916 "rw_mbytes_per_sec": 0, 00:23:10.916 "r_mbytes_per_sec": 0, 00:23:10.916 "w_mbytes_per_sec": 0 00:23:10.916 }, 00:23:10.916 "claimed": true, 00:23:10.916 "claim_type": "exclusive_write", 00:23:10.916 "zoned": false, 00:23:10.916 "supported_io_types": { 00:23:10.916 "read": true, 00:23:10.916 "write": true, 00:23:10.916 "unmap": true, 00:23:10.916 "flush": true, 00:23:10.916 "reset": true, 00:23:10.916 "nvme_admin": false, 00:23:10.916 "nvme_io": false, 00:23:10.916 "nvme_io_md": false, 00:23:10.916 "write_zeroes": true, 00:23:10.916 "zcopy": true, 00:23:10.916 "get_zone_info": false, 00:23:10.916 "zone_management": false, 00:23:10.916 "zone_append": false, 00:23:10.916 "compare": false, 00:23:10.916 "compare_and_write": false, 00:23:10.916 "abort": true, 00:23:10.916 "seek_hole": false, 00:23:10.916 "seek_data": false, 00:23:10.916 "copy": true, 00:23:10.916 "nvme_iov_md": false 00:23:10.916 }, 00:23:10.916 "memory_domains": [ 00:23:10.916 { 00:23:10.916 "dma_device_id": "system", 00:23:10.916 "dma_device_type": 1 00:23:10.916 }, 00:23:10.916 { 00:23:10.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.916 "dma_device_type": 2 00:23:10.916 } 00:23:10.916 ], 00:23:10.916 "driver_specific": { 00:23:10.916 "passthru": { 00:23:10.916 "name": "pt2", 00:23:10.916 "base_bdev_name": "malloc2" 00:23:10.916 } 00:23:10.916 } 00:23:10.916 }' 00:23:10.916 10:30:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:11.174 10:30:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:11.432 [2024-07-15 10:30:36.111292] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.433 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d5873836-57fd-4c31-9198-bd8fe198f174 00:23:11.433 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z d5873836-57fd-4c31-9198-bd8fe198f174 ']' 00:23:11.433 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:11.690 [2024-07-15 10:30:36.271555] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.690 [2024-07-15 10:30:36.271570] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:11.690 [2024-07-15 10:30:36.271609] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.690 [2024-07-15 10:30:36.271646] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.690 [2024-07-15 10:30:36.271654] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25bdae0 name raid_bdev1, state offline 00:23:11.690 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.690 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:11.690 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:11.690 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:11.690 10:30:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:11.690 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:11.948 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:11.948 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:12.207 10:30:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:23:12.518 [2024-07-15 10:30:37.117717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:12.518 [2024-07-15 10:30:37.118688] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:23:12.518 [2024-07-15 10:30:37.118731] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:12.518 [2024-07-15 10:30:37.118761] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:12.518 [2024-07-15 10:30:37.118790] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:12.518 [2024-07-15 10:30:37.118796] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25b37e0 name raid_bdev1, state configuring 00:23:12.518 request: 00:23:12.518 { 00:23:12.518 "name": "raid_bdev1", 00:23:12.518 "raid_level": "raid1", 00:23:12.518 "base_bdevs": [ 00:23:12.518 "malloc1", 00:23:12.518 "malloc2" 00:23:12.518 ], 00:23:12.518 "superblock": false, 00:23:12.518 "method": "bdev_raid_create", 00:23:12.518 "req_id": 1 00:23:12.518 } 00:23:12.518 Got JSON-RPC error response 00:23:12.518 response: 00:23:12.518 { 00:23:12.518 "code": -17, 00:23:12.518 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:12.518 } 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:12.518 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:12.777 [2024-07-15 10:30:37.450560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:12.777 [2024-07-15 10:30:37.450597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.777 [2024-07-15 10:30:37.450625] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b6d90 00:23:12.777 [2024-07-15 10:30:37.450635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.777 [2024-07-15 10:30:37.451672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.777 [2024-07-15 10:30:37.451695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:12.777 [2024-07-15 10:30:37.451731] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:12.777 [2024-07-15 10:30:37.451750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:12.777 pt1 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 
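The verify_raid_bdev_state helper traced above reduces to a single RPC against the dedicated raid socket plus a jq filter over its JSON output. Below is a condensed bash sketch of that pattern, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock; the RPC name, socket path and jq filter are taken verbatim from the trace, while the helper name check_raid_state is illustrative and the real script additionally compares strip_size and the discovered base bdev count.

    # Illustrative condensation of the verify_raid_bdev_state pattern seen in the trace.
    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    check_raid_state() {    # hypothetical helper name
        local name=$1 expected_state=$2 raid_level=$3 operational=$4
        local info
        info=$(rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [ "$(echo "$info" | jq -r .state)" = "$expected_state" ] || return 1
        [ "$(echo "$info" | jq -r .raid_level)" = "$raid_level" ] || return 1
        [ "$(echo "$info" | jq -r .num_base_bdevs_operational)" -eq "$operational" ] || return 1
    }

    check_raid_state raid_bdev1 configuring raid1 2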
00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.777 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.035 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.035 "name": "raid_bdev1", 00:23:13.035 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:13.035 "strip_size_kb": 0, 00:23:13.035 "state": "configuring", 00:23:13.035 "raid_level": "raid1", 00:23:13.035 "superblock": true, 00:23:13.035 "num_base_bdevs": 2, 00:23:13.035 "num_base_bdevs_discovered": 1, 00:23:13.035 "num_base_bdevs_operational": 2, 00:23:13.035 "base_bdevs_list": [ 00:23:13.035 { 00:23:13.035 "name": "pt1", 00:23:13.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:13.035 "is_configured": true, 00:23:13.035 "data_offset": 256, 00:23:13.035 "data_size": 7936 00:23:13.035 }, 00:23:13.035 { 00:23:13.035 "name": null, 00:23:13.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:13.035 "is_configured": false, 00:23:13.035 "data_offset": 256, 00:23:13.035 "data_size": 7936 00:23:13.035 } 00:23:13.035 ] 00:23:13.035 }' 00:23:13.035 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.035 10:30:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:13.601 [2024-07-15 10:30:38.292727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:13.601 [2024-07-15 10:30:38.292764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.601 [2024-07-15 10:30:38.292793] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b5080 00:23:13.601 [2024-07-15 10:30:38.292801] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.601 [2024-07-15 10:30:38.292944] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.601 [2024-07-15 10:30:38.292955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:13.601 [2024-07-15 10:30:38.292985] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:13.601 [2024-07-15 10:30:38.292997] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:13.601 [2024-07-15 10:30:38.293054] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25bd6e0 00:23:13.601 [2024-07-15 10:30:38.293060] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:13.601 [2024-07-15 10:30:38.293101] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b8d90 00:23:13.601 [2024-07-15 10:30:38.293155] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25bd6e0 00:23:13.601 [2024-07-15 10:30:38.293162] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25bd6e0 00:23:13.601 [2024-07-15 10:30:38.293200] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.601 pt2 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.601 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.602 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.602 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.602 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.860 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.860 "name": "raid_bdev1", 00:23:13.860 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:13.860 "strip_size_kb": 0, 
00:23:13.860 "state": "online", 00:23:13.860 "raid_level": "raid1", 00:23:13.860 "superblock": true, 00:23:13.860 "num_base_bdevs": 2, 00:23:13.860 "num_base_bdevs_discovered": 2, 00:23:13.860 "num_base_bdevs_operational": 2, 00:23:13.860 "base_bdevs_list": [ 00:23:13.860 { 00:23:13.860 "name": "pt1", 00:23:13.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:13.860 "is_configured": true, 00:23:13.860 "data_offset": 256, 00:23:13.860 "data_size": 7936 00:23:13.860 }, 00:23:13.860 { 00:23:13.860 "name": "pt2", 00:23:13.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:13.860 "is_configured": true, 00:23:13.860 "data_offset": 256, 00:23:13.860 "data_size": 7936 00:23:13.860 } 00:23:13.860 ] 00:23:13.860 }' 00:23:13.860 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.860 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:14.450 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:14.450 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:14.450 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:14.450 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:14.450 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:14.450 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:23:14.450 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:14.450 10:30:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:14.450 [2024-07-15 10:30:39.131065] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:14.450 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:14.450 "name": "raid_bdev1", 00:23:14.450 "aliases": [ 00:23:14.450 "d5873836-57fd-4c31-9198-bd8fe198f174" 00:23:14.450 ], 00:23:14.450 "product_name": "Raid Volume", 00:23:14.450 "block_size": 4128, 00:23:14.450 "num_blocks": 7936, 00:23:14.450 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:14.450 "md_size": 32, 00:23:14.450 "md_interleave": true, 00:23:14.450 "dif_type": 0, 00:23:14.450 "assigned_rate_limits": { 00:23:14.450 "rw_ios_per_sec": 0, 00:23:14.450 "rw_mbytes_per_sec": 0, 00:23:14.450 "r_mbytes_per_sec": 0, 00:23:14.450 "w_mbytes_per_sec": 0 00:23:14.450 }, 00:23:14.450 "claimed": false, 00:23:14.450 "zoned": false, 00:23:14.450 "supported_io_types": { 00:23:14.450 "read": true, 00:23:14.450 "write": true, 00:23:14.450 "unmap": false, 00:23:14.450 "flush": false, 00:23:14.450 "reset": true, 00:23:14.450 "nvme_admin": false, 00:23:14.450 "nvme_io": false, 00:23:14.450 "nvme_io_md": false, 00:23:14.450 "write_zeroes": true, 00:23:14.450 "zcopy": false, 00:23:14.450 "get_zone_info": false, 00:23:14.450 "zone_management": false, 00:23:14.450 "zone_append": false, 00:23:14.450 "compare": false, 00:23:14.450 "compare_and_write": false, 00:23:14.450 "abort": false, 00:23:14.450 "seek_hole": false, 00:23:14.450 "seek_data": false, 00:23:14.450 "copy": false, 
00:23:14.450 "nvme_iov_md": false 00:23:14.450 }, 00:23:14.450 "memory_domains": [ 00:23:14.450 { 00:23:14.450 "dma_device_id": "system", 00:23:14.450 "dma_device_type": 1 00:23:14.450 }, 00:23:14.450 { 00:23:14.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.450 "dma_device_type": 2 00:23:14.450 }, 00:23:14.450 { 00:23:14.450 "dma_device_id": "system", 00:23:14.450 "dma_device_type": 1 00:23:14.450 }, 00:23:14.450 { 00:23:14.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.450 "dma_device_type": 2 00:23:14.450 } 00:23:14.450 ], 00:23:14.450 "driver_specific": { 00:23:14.450 "raid": { 00:23:14.450 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:14.450 "strip_size_kb": 0, 00:23:14.450 "state": "online", 00:23:14.450 "raid_level": "raid1", 00:23:14.450 "superblock": true, 00:23:14.450 "num_base_bdevs": 2, 00:23:14.450 "num_base_bdevs_discovered": 2, 00:23:14.450 "num_base_bdevs_operational": 2, 00:23:14.450 "base_bdevs_list": [ 00:23:14.450 { 00:23:14.450 "name": "pt1", 00:23:14.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:14.450 "is_configured": true, 00:23:14.450 "data_offset": 256, 00:23:14.450 "data_size": 7936 00:23:14.450 }, 00:23:14.450 { 00:23:14.450 "name": "pt2", 00:23:14.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:14.450 "is_configured": true, 00:23:14.450 "data_offset": 256, 00:23:14.450 "data_size": 7936 00:23:14.450 } 00:23:14.450 ] 00:23:14.450 } 00:23:14.450 } 00:23:14.450 }' 00:23:14.450 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:14.450 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:14.450 pt2' 00:23:14.450 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:14.450 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:14.450 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:14.709 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:14.709 "name": "pt1", 00:23:14.709 "aliases": [ 00:23:14.709 "00000000-0000-0000-0000-000000000001" 00:23:14.709 ], 00:23:14.709 "product_name": "passthru", 00:23:14.709 "block_size": 4128, 00:23:14.709 "num_blocks": 8192, 00:23:14.709 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:14.709 "md_size": 32, 00:23:14.709 "md_interleave": true, 00:23:14.709 "dif_type": 0, 00:23:14.709 "assigned_rate_limits": { 00:23:14.709 "rw_ios_per_sec": 0, 00:23:14.709 "rw_mbytes_per_sec": 0, 00:23:14.709 "r_mbytes_per_sec": 0, 00:23:14.709 "w_mbytes_per_sec": 0 00:23:14.709 }, 00:23:14.709 "claimed": true, 00:23:14.709 "claim_type": "exclusive_write", 00:23:14.709 "zoned": false, 00:23:14.709 "supported_io_types": { 00:23:14.709 "read": true, 00:23:14.709 "write": true, 00:23:14.709 "unmap": true, 00:23:14.709 "flush": true, 00:23:14.709 "reset": true, 00:23:14.709 "nvme_admin": false, 00:23:14.709 "nvme_io": false, 00:23:14.709 "nvme_io_md": false, 00:23:14.709 "write_zeroes": true, 00:23:14.709 "zcopy": true, 00:23:14.709 "get_zone_info": false, 00:23:14.709 "zone_management": false, 00:23:14.709 "zone_append": false, 00:23:14.709 "compare": false, 00:23:14.709 "compare_and_write": false, 
00:23:14.709 "abort": true, 00:23:14.709 "seek_hole": false, 00:23:14.709 "seek_data": false, 00:23:14.709 "copy": true, 00:23:14.709 "nvme_iov_md": false 00:23:14.709 }, 00:23:14.709 "memory_domains": [ 00:23:14.709 { 00:23:14.709 "dma_device_id": "system", 00:23:14.709 "dma_device_type": 1 00:23:14.709 }, 00:23:14.709 { 00:23:14.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.709 "dma_device_type": 2 00:23:14.709 } 00:23:14.709 ], 00:23:14.709 "driver_specific": { 00:23:14.709 "passthru": { 00:23:14.709 "name": "pt1", 00:23:14.709 "base_bdev_name": "malloc1" 00:23:14.709 } 00:23:14.709 } 00:23:14.709 }' 00:23:14.709 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.709 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.709 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:14.709 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.709 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:14.966 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:15.224 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:15.224 "name": "pt2", 00:23:15.224 "aliases": [ 00:23:15.224 "00000000-0000-0000-0000-000000000002" 00:23:15.224 ], 00:23:15.224 "product_name": "passthru", 00:23:15.224 "block_size": 4128, 00:23:15.224 "num_blocks": 8192, 00:23:15.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.224 "md_size": 32, 00:23:15.224 "md_interleave": true, 00:23:15.224 "dif_type": 0, 00:23:15.224 "assigned_rate_limits": { 00:23:15.224 "rw_ios_per_sec": 0, 00:23:15.224 "rw_mbytes_per_sec": 0, 00:23:15.224 "r_mbytes_per_sec": 0, 00:23:15.224 "w_mbytes_per_sec": 0 00:23:15.224 }, 00:23:15.224 "claimed": true, 00:23:15.224 "claim_type": "exclusive_write", 00:23:15.224 "zoned": false, 00:23:15.224 "supported_io_types": { 00:23:15.224 "read": true, 00:23:15.224 "write": true, 00:23:15.224 "unmap": true, 00:23:15.224 "flush": true, 00:23:15.224 "reset": true, 00:23:15.224 "nvme_admin": false, 00:23:15.224 "nvme_io": false, 00:23:15.224 "nvme_io_md": false, 00:23:15.224 "write_zeroes": true, 00:23:15.224 "zcopy": 
true, 00:23:15.224 "get_zone_info": false, 00:23:15.224 "zone_management": false, 00:23:15.224 "zone_append": false, 00:23:15.224 "compare": false, 00:23:15.224 "compare_and_write": false, 00:23:15.224 "abort": true, 00:23:15.224 "seek_hole": false, 00:23:15.224 "seek_data": false, 00:23:15.224 "copy": true, 00:23:15.224 "nvme_iov_md": false 00:23:15.224 }, 00:23:15.224 "memory_domains": [ 00:23:15.224 { 00:23:15.224 "dma_device_id": "system", 00:23:15.224 "dma_device_type": 1 00:23:15.224 }, 00:23:15.224 { 00:23:15.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.224 "dma_device_type": 2 00:23:15.224 } 00:23:15.224 ], 00:23:15.224 "driver_specific": { 00:23:15.224 "passthru": { 00:23:15.224 "name": "pt2", 00:23:15.224 "base_bdev_name": "malloc2" 00:23:15.224 } 00:23:15.224 } 00:23:15.224 }' 00:23:15.224 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.224 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.224 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:23:15.224 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.224 10:30:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:15.482 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:15.740 [2024-07-15 10:30:40.314092] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' d5873836-57fd-4c31-9198-bd8fe198f174 '!=' d5873836-57fd-4c31-9198-bd8fe198f174 ']' 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:15.740 [2024-07-15 10:30:40.494640] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.740 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.997 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:15.997 "name": "raid_bdev1", 00:23:15.997 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:15.997 "strip_size_kb": 0, 00:23:15.997 "state": "online", 00:23:15.997 "raid_level": "raid1", 00:23:15.997 "superblock": true, 00:23:15.997 "num_base_bdevs": 2, 00:23:15.997 "num_base_bdevs_discovered": 1, 00:23:15.997 "num_base_bdevs_operational": 1, 00:23:15.997 "base_bdevs_list": [ 00:23:15.997 { 00:23:15.997 "name": null, 00:23:15.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.997 "is_configured": false, 00:23:15.997 "data_offset": 256, 00:23:15.997 "data_size": 7936 00:23:15.997 }, 00:23:15.997 { 00:23:15.997 "name": "pt2", 00:23:15.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:15.997 "is_configured": true, 00:23:15.997 "data_offset": 256, 00:23:15.997 "data_size": 7936 00:23:15.997 } 00:23:15.997 ] 00:23:15.997 }' 00:23:15.997 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:15.997 10:30:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:16.560 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:16.560 [2024-07-15 10:30:41.340796] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.560 [2024-07-15 10:30:41.340819] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.560 [2024-07-15 10:30:41.340862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.560 [2024-07-15 10:30:41.340891] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.560 [2024-07-15 10:30:41.340899] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x25bd6e0 name raid_bdev1, state offline 00:23:16.816 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.816 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:23:16.816 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:23:16.816 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:23:16.816 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:23:16.816 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:16.816 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:17.072 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:23:17.072 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:23:17.072 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:23:17.072 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:23:17.072 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:23:17.072 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:17.072 [2024-07-15 10:30:41.846092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:17.072 [2024-07-15 10:30:41.846131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.072 [2024-07-15 10:30:41.846143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b89a0 00:23:17.072 [2024-07-15 10:30:41.846167] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.072 [2024-07-15 10:30:41.847230] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.072 [2024-07-15 10:30:41.847252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:17.072 [2024-07-15 10:30:41.847286] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:17.072 [2024-07-15 10:30:41.847304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:17.072 [2024-07-15 10:30:41.847351] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25b8420 00:23:17.072 [2024-07-15 10:30:41.847357] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:17.072 [2024-07-15 10:30:41.847400] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2420260 00:23:17.072 [2024-07-15 10:30:41.847447] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25b8420 00:23:17.072 [2024-07-15 10:30:41.847453] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25b8420 00:23:17.072 [2024-07-15 10:30:41.847488] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.072 pt2 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.330 10:30:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.330 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.330 "name": "raid_bdev1", 00:23:17.330 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:17.330 "strip_size_kb": 0, 00:23:17.330 "state": "online", 00:23:17.330 "raid_level": "raid1", 00:23:17.330 "superblock": true, 00:23:17.330 "num_base_bdevs": 2, 00:23:17.330 "num_base_bdevs_discovered": 1, 00:23:17.330 "num_base_bdevs_operational": 1, 00:23:17.330 "base_bdevs_list": [ 00:23:17.330 { 00:23:17.330 "name": null, 00:23:17.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.330 "is_configured": false, 00:23:17.330 "data_offset": 256, 00:23:17.330 "data_size": 7936 00:23:17.330 }, 00:23:17.330 { 00:23:17.330 "name": "pt2", 00:23:17.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:17.330 "is_configured": true, 00:23:17.330 "data_offset": 256, 00:23:17.330 "data_size": 7936 00:23:17.330 } 00:23:17.330 ] 00:23:17.330 }' 00:23:17.330 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.330 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:17.896 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:17.896 [2024-07-15 10:30:42.684260] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:17.896 [2024-07-15 10:30:42.684282] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.896 [2024-07-15 10:30:42.684325] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.896 [2024-07-15 10:30:42.684354] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.896 [2024-07-15 10:30:42.684362] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25b8420 name raid_bdev1, state offline 00:23:18.153 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.153 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:23:18.153 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:23:18.153 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:23:18.154 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:23:18.154 10:30:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:18.411 [2024-07-15 10:30:42.997057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:18.411 [2024-07-15 10:30:42.997088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.411 [2024-07-15 10:30:42.997100] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b86a0 00:23:18.411 [2024-07-15 10:30:42.997110] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.411 [2024-07-15 10:30:42.998170] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.411 [2024-07-15 10:30:42.998198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:18.411 [2024-07-15 10:30:42.998232] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:18.411 [2024-07-15 10:30:42.998249] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:18.411 [2024-07-15 10:30:42.998305] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:18.411 [2024-07-15 10:30:42.998313] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:18.411 [2024-07-15 10:30:42.998323] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25bb550 name raid_bdev1, state configuring 00:23:18.411 [2024-07-15 10:30:42.998338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:18.411 [2024-07-15 10:30:42.998373] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25bbff0 00:23:18.411 [2024-07-15 10:30:42.998380] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:18.411 [2024-07-15 10:30:42.998420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b94d0 00:23:18.411 [2024-07-15 10:30:42.998469] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25bbff0 00:23:18.411 [2024-07-15 10:30:42.998475] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25bbff0 00:23:18.411 [2024-07-15 10:30:42.998514] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.411 pt1 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 
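The reassembly traced in this part of the test does not go back through bdev_raid_create (the one attempt at it earlier is the negative test that returns -17 "File exists"): as the NOTICE/DEBUG lines above show, re-registering a passthru bdev with its original UUID is enough for the raid examine callback to find the on-disk superblock and bring raid_bdev1 back on its own. A hedged bash sketch of that sequence, using only RPCs that appear in this log (socket path, bdev names and UUID as in this run; the final jq projection and the expected "online" result are illustrative):

    rpc() { /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Tear down the array and one passthru; the backing malloc bdev still holds the superblock.
    rpc bdev_raid_delete raid_bdev1
    rpc bdev_passthru_delete pt2

    # Re-register the passthru with its original UUID; examine finds the superblock on pt2
    # and re-creates raid_bdev1 around it with a single operational base bdev.
    rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect: online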
00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.411 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.412 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.412 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.412 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.412 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.412 "name": "raid_bdev1", 00:23:18.412 "uuid": "d5873836-57fd-4c31-9198-bd8fe198f174", 00:23:18.412 "strip_size_kb": 0, 00:23:18.412 "state": "online", 00:23:18.412 "raid_level": "raid1", 00:23:18.412 "superblock": true, 00:23:18.412 "num_base_bdevs": 2, 00:23:18.412 "num_base_bdevs_discovered": 1, 00:23:18.412 "num_base_bdevs_operational": 1, 00:23:18.412 "base_bdevs_list": [ 00:23:18.412 { 00:23:18.412 "name": null, 00:23:18.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.412 "is_configured": false, 00:23:18.412 "data_offset": 256, 00:23:18.412 "data_size": 7936 00:23:18.412 }, 00:23:18.412 { 00:23:18.412 "name": "pt2", 00:23:18.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:18.412 "is_configured": true, 00:23:18.412 "data_offset": 256, 00:23:18.412 "data_size": 7936 00:23:18.412 } 00:23:18.412 ] 00:23:18.412 }' 00:23:18.412 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.412 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:18.976 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:23:18.976 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:19.234 10:30:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:23:19.234 [2024-07-15 10:30:43.971717] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' d5873836-57fd-4c31-9198-bd8fe198f174 '!=' d5873836-57fd-4c31-9198-bd8fe198f174 ']' 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 1901905 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 1901905 ']' 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 1901905 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.234 10:30:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1901905 00:23:19.492 10:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:19.492 10:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:19.492 10:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1901905' 00:23:19.492 killing process with pid 1901905 00:23:19.492 10:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 1901905 00:23:19.492 [2024-07-15 10:30:44.041814] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:19.492 [2024-07-15 10:30:44.041851] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:19.492 [2024-07-15 10:30:44.041880] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:19.492 [2024-07-15 10:30:44.041888] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25bbff0 name raid_bdev1, state offline 00:23:19.492 10:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 1901905 00:23:19.492 [2024-07-15 10:30:44.057063] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:19.492 10:30:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:23:19.492 00:23:19.492 real 0m11.789s 00:23:19.492 user 0m21.232s 00:23:19.492 sys 0m2.352s 00:23:19.492 10:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:19.492 10:30:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:19.492 ************************************ 00:23:19.492 END TEST raid_superblock_test_md_interleaved 00:23:19.492 ************************************ 00:23:19.492 10:30:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:19.492 10:30:44 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:19.492 10:30:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:23:19.492 10:30:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.492 10:30:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:19.751 
************************************ 00:23:19.751 START TEST raid_rebuild_test_sb_md_interleaved 00:23:19.751 ************************************ 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=1904316 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 1904316 /var/tmp/spdk-raid.sock 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 1904316 ']' 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:19.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.751 10:30:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:19.751 [2024-07-15 10:30:44.375299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:19.751 [2024-07-15 10:30:44.375347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904316 ] 00:23:19.751 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:19.751 Zero copy mechanism will not be used. 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:23:19.751 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.751 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:19.751 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.752 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:19.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.752 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:19.752 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:19.752 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:19.752 [2024-07-15 10:30:44.466991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.009 [2024-07-15 10:30:44.541184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.009 [2024-07-15 10:30:44.595044] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.009 [2024-07-15 10:30:44.595077] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:20.575 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.575 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:23:20.575 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:20.575 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # 
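Before the malloc/passthru stack is created next, the harness has already launched a dedicated bdevperf instance and waited for its RPC socket; everything after this point is driven over /var/tmp/spdk-raid.sock. A condensed sketch of that setup, using only commands visible in this log (an outline of the recorded run, not the test script itself; the explicit backgrounding is an assumption):

    # Launch bdevperf with a private RPC socket and bdev_raid debug logging enabled
    /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    # waitforlisten <pid> /var/tmp/spdk-raid.sock blocks until the socket accepts RPCs;
    # the base bdevs are then built as malloc -> passthru pairs (BaseBdev1, BaseBdev2),
    # and the 'spare' device gets an extra delay bdev layered underneath it.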
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:20.575 BaseBdev1_malloc 00:23:20.575 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:20.833 [2024-07-15 10:30:45.499097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:20.833 [2024-07-15 10:30:45.499132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.833 [2024-07-15 10:30:45.499164] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22f5610 00:23:20.833 [2024-07-15 10:30:45.499173] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.833 [2024-07-15 10:30:45.500205] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.833 [2024-07-15 10:30:45.500228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:20.833 BaseBdev1 00:23:20.833 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:23:20.833 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:21.091 BaseBdev2_malloc 00:23:21.091 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:21.091 [2024-07-15 10:30:45.839946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:21.091 [2024-07-15 10:30:45.839979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.091 [2024-07-15 10:30:45.839995] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22eccc0 00:23:21.091 [2024-07-15 10:30:45.840019] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.091 [2024-07-15 10:30:45.840933] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.091 [2024-07-15 10:30:45.840954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:21.091 BaseBdev2 00:23:21.091 10:30:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:21.349 spare_malloc 00:23:21.349 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:21.607 spare_delay 00:23:21.607 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:21.607 [2024-07-15 10:30:46.353149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:21.607 [2024-07-15 10:30:46.353185] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:21.607 [2024-07-15 10:30:46.353200] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22ed8e0 00:23:21.607 [2024-07-15 10:30:46.353225] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:21.607 [2024-07-15 10:30:46.354179] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:21.607 [2024-07-15 10:30:46.354203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:21.607 spare 00:23:21.607 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:21.865 [2024-07-15 10:30:46.525612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.865 [2024-07-15 10:30:46.526477] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:21.865 [2024-07-15 10:30:46.526604] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22f02b0 00:23:21.865 [2024-07-15 10:30:46.526613] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:21.865 [2024-07-15 10:30:46.526664] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2158210 00:23:21.865 [2024-07-15 10:30:46.526718] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22f02b0 00:23:21.865 [2024-07-15 10:30:46.526724] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22f02b0 00:23:21.865 [2024-07-15 10:30:46.526761] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.865 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.123 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:23:22.123 "name": "raid_bdev1", 00:23:22.123 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:22.123 "strip_size_kb": 0, 00:23:22.123 "state": "online", 00:23:22.123 "raid_level": "raid1", 00:23:22.123 "superblock": true, 00:23:22.123 "num_base_bdevs": 2, 00:23:22.123 "num_base_bdevs_discovered": 2, 00:23:22.123 "num_base_bdevs_operational": 2, 00:23:22.123 "base_bdevs_list": [ 00:23:22.123 { 00:23:22.123 "name": "BaseBdev1", 00:23:22.123 "uuid": "cbd0ebae-2c44-5078-9867-ca1fa089ebdf", 00:23:22.123 "is_configured": true, 00:23:22.123 "data_offset": 256, 00:23:22.123 "data_size": 7936 00:23:22.123 }, 00:23:22.123 { 00:23:22.123 "name": "BaseBdev2", 00:23:22.123 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:22.123 "is_configured": true, 00:23:22.123 "data_offset": 256, 00:23:22.123 "data_size": 7936 00:23:22.123 } 00:23:22.123 ] 00:23:22.123 }' 00:23:22.123 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.123 10:30:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:22.688 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:22.688 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:23:22.688 [2024-07-15 10:30:47.347857] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:22.688 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:23:22.688 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:22.688 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:22.946 [2024-07-15 10:30:47.688566] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.946 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.947 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.947 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.205 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.205 "name": "raid_bdev1", 00:23:23.205 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:23.205 "strip_size_kb": 0, 00:23:23.205 "state": "online", 00:23:23.205 "raid_level": "raid1", 00:23:23.205 "superblock": true, 00:23:23.205 "num_base_bdevs": 2, 00:23:23.205 "num_base_bdevs_discovered": 1, 00:23:23.205 "num_base_bdevs_operational": 1, 00:23:23.205 "base_bdevs_list": [ 00:23:23.205 { 00:23:23.205 "name": null, 00:23:23.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.205 "is_configured": false, 00:23:23.205 "data_offset": 256, 00:23:23.205 "data_size": 7936 00:23:23.205 }, 00:23:23.205 { 00:23:23.205 "name": "BaseBdev2", 00:23:23.205 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:23.205 "is_configured": true, 00:23:23.205 "data_offset": 256, 00:23:23.205 "data_size": 7936 00:23:23.205 } 00:23:23.205 ] 00:23:23.205 }' 00:23:23.205 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.205 10:30:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:23.772 10:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:23.772 [2024-07-15 10:30:48.514695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:23.772 [2024-07-15 10:30:48.517825] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22f1070 00:23:23.772 [2024-07-15 10:30:48.519349] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:23.772 10:30:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.148 
10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:25.148 "name": "raid_bdev1", 00:23:25.148 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:25.148 "strip_size_kb": 0, 00:23:25.148 "state": "online", 00:23:25.148 "raid_level": "raid1", 00:23:25.148 "superblock": true, 00:23:25.148 "num_base_bdevs": 2, 00:23:25.148 "num_base_bdevs_discovered": 2, 00:23:25.148 "num_base_bdevs_operational": 2, 00:23:25.148 "process": { 00:23:25.148 "type": "rebuild", 00:23:25.148 "target": "spare", 00:23:25.148 "progress": { 00:23:25.148 "blocks": 2816, 00:23:25.148 "percent": 35 00:23:25.148 } 00:23:25.148 }, 00:23:25.148 "base_bdevs_list": [ 00:23:25.148 { 00:23:25.148 "name": "spare", 00:23:25.148 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:25.148 "is_configured": true, 00:23:25.148 "data_offset": 256, 00:23:25.148 "data_size": 7936 00:23:25.148 }, 00:23:25.148 { 00:23:25.148 "name": "BaseBdev2", 00:23:25.148 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:25.148 "is_configured": true, 00:23:25.148 "data_offset": 256, 00:23:25.148 "data_size": 7936 00:23:25.148 } 00:23:25.148 ] 00:23:25.148 }' 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:25.148 10:30:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:25.408 [2024-07-15 10:30:49.947580] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:25.408 [2024-07-15 10:30:50.029782] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:25.408 [2024-07-15 10:30:50.029814] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.408 [2024-07-15 10:30:50.029825] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:25.408 [2024-07-15 10:30:50.029831] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.408 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.713 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.713 "name": "raid_bdev1", 00:23:25.713 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:25.713 "strip_size_kb": 0, 00:23:25.713 "state": "online", 00:23:25.713 "raid_level": "raid1", 00:23:25.713 "superblock": true, 00:23:25.713 "num_base_bdevs": 2, 00:23:25.713 "num_base_bdevs_discovered": 1, 00:23:25.713 "num_base_bdevs_operational": 1, 00:23:25.713 "base_bdevs_list": [ 00:23:25.713 { 00:23:25.713 "name": null, 00:23:25.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.713 "is_configured": false, 00:23:25.713 "data_offset": 256, 00:23:25.713 "data_size": 7936 00:23:25.713 }, 00:23:25.713 { 00:23:25.713 "name": "BaseBdev2", 00:23:25.713 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:25.713 "is_configured": true, 00:23:25.713 "data_offset": 256, 00:23:25.713 "data_size": 7936 00:23:25.713 } 00:23:25.713 ] 00:23:25.713 }' 00:23:25.713 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.713 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:25.984 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:25.984 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:25.984 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:25.984 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:25.984 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:25.984 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.984 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.242 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:26.242 "name": "raid_bdev1", 00:23:26.242 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:26.242 "strip_size_kb": 0, 00:23:26.242 "state": "online", 00:23:26.242 "raid_level": "raid1", 00:23:26.242 "superblock": true, 00:23:26.242 "num_base_bdevs": 2, 00:23:26.242 "num_base_bdevs_discovered": 1, 00:23:26.242 "num_base_bdevs_operational": 1, 00:23:26.242 "base_bdevs_list": [ 00:23:26.242 { 00:23:26.242 "name": null, 00:23:26.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.242 
"is_configured": false, 00:23:26.242 "data_offset": 256, 00:23:26.242 "data_size": 7936 00:23:26.242 }, 00:23:26.242 { 00:23:26.242 "name": "BaseBdev2", 00:23:26.242 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:26.242 "is_configured": true, 00:23:26.242 "data_offset": 256, 00:23:26.242 "data_size": 7936 00:23:26.242 } 00:23:26.242 ] 00:23:26.242 }' 00:23:26.242 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:26.242 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:26.242 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:26.242 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:26.242 10:30:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:26.501 [2024-07-15 10:30:51.136302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:26.501 [2024-07-15 10:30:51.139466] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22e6830 00:23:26.501 [2024-07-15 10:30:51.140504] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:26.501 10:30:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:27.434 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:27.434 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:27.434 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:27.434 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:27.434 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:27.434 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.434 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:27.692 "name": "raid_bdev1", 00:23:27.692 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:27.692 "strip_size_kb": 0, 00:23:27.692 "state": "online", 00:23:27.692 "raid_level": "raid1", 00:23:27.692 "superblock": true, 00:23:27.692 "num_base_bdevs": 2, 00:23:27.692 "num_base_bdevs_discovered": 2, 00:23:27.692 "num_base_bdevs_operational": 2, 00:23:27.692 "process": { 00:23:27.692 "type": "rebuild", 00:23:27.692 "target": "spare", 00:23:27.692 "progress": { 00:23:27.692 "blocks": 2816, 00:23:27.692 "percent": 35 00:23:27.692 } 00:23:27.692 }, 00:23:27.692 "base_bdevs_list": [ 00:23:27.692 { 00:23:27.692 "name": "spare", 00:23:27.692 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:27.692 "is_configured": true, 00:23:27.692 "data_offset": 256, 00:23:27.692 "data_size": 7936 00:23:27.692 }, 00:23:27.692 { 00:23:27.692 "name": "BaseBdev2", 
00:23:27.692 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:27.692 "is_configured": true, 00:23:27.692 "data_offset": 256, 00:23:27.692 "data_size": 7936 00:23:27.692 } 00:23:27.692 ] 00:23:27.692 }' 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:23:27.692 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=873 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.692 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.950 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:27.950 "name": "raid_bdev1", 00:23:27.950 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:27.950 "strip_size_kb": 0, 00:23:27.950 "state": "online", 00:23:27.950 "raid_level": "raid1", 00:23:27.951 "superblock": true, 00:23:27.951 "num_base_bdevs": 2, 00:23:27.951 "num_base_bdevs_discovered": 2, 00:23:27.951 "num_base_bdevs_operational": 2, 00:23:27.951 "process": { 00:23:27.951 "type": "rebuild", 00:23:27.951 "target": "spare", 00:23:27.951 "progress": { 00:23:27.951 "blocks": 3584, 00:23:27.951 "percent": 45 00:23:27.951 } 00:23:27.951 }, 00:23:27.951 "base_bdevs_list": [ 00:23:27.951 { 00:23:27.951 "name": "spare", 00:23:27.951 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:27.951 "is_configured": true, 00:23:27.951 
"data_offset": 256, 00:23:27.951 "data_size": 7936 00:23:27.951 }, 00:23:27.951 { 00:23:27.951 "name": "BaseBdev2", 00:23:27.951 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:27.951 "is_configured": true, 00:23:27.951 "data_offset": 256, 00:23:27.951 "data_size": 7936 00:23:27.951 } 00:23:27.951 ] 00:23:27.951 }' 00:23:27.951 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:27.951 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:27.951 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:27.951 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:27.951 10:30:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:28.883 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:28.883 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:28.883 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:28.883 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:28.883 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:28.883 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:28.883 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.883 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.141 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:29.141 "name": "raid_bdev1", 00:23:29.141 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:29.141 "strip_size_kb": 0, 00:23:29.141 "state": "online", 00:23:29.141 "raid_level": "raid1", 00:23:29.141 "superblock": true, 00:23:29.141 "num_base_bdevs": 2, 00:23:29.141 "num_base_bdevs_discovered": 2, 00:23:29.141 "num_base_bdevs_operational": 2, 00:23:29.141 "process": { 00:23:29.141 "type": "rebuild", 00:23:29.141 "target": "spare", 00:23:29.141 "progress": { 00:23:29.141 "blocks": 6656, 00:23:29.141 "percent": 83 00:23:29.141 } 00:23:29.141 }, 00:23:29.141 "base_bdevs_list": [ 00:23:29.141 { 00:23:29.141 "name": "spare", 00:23:29.141 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:29.141 "is_configured": true, 00:23:29.141 "data_offset": 256, 00:23:29.141 "data_size": 7936 00:23:29.141 }, 00:23:29.141 { 00:23:29.141 "name": "BaseBdev2", 00:23:29.141 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:29.141 "is_configured": true, 00:23:29.141 "data_offset": 256, 00:23:29.141 "data_size": 7936 00:23:29.141 } 00:23:29.141 ] 00:23:29.141 }' 00:23:29.141 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:29.141 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:29.141 10:30:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:29.141 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.141 10:30:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:23:29.708 [2024-07-15 10:30:54.261759] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:29.708 [2024-07-15 10:30:54.261801] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:29.708 [2024-07-15 10:30:54.261859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.274 10:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:23:30.274 10:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.274 10:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:30.274 10:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:30.274 10:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:30.274 10:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:30.274 10:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.274 10:30:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.531 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.531 "name": "raid_bdev1", 00:23:30.531 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:30.531 "strip_size_kb": 0, 00:23:30.531 "state": "online", 00:23:30.531 "raid_level": "raid1", 00:23:30.531 "superblock": true, 00:23:30.531 "num_base_bdevs": 2, 00:23:30.531 "num_base_bdevs_discovered": 2, 00:23:30.531 "num_base_bdevs_operational": 2, 00:23:30.531 "base_bdevs_list": [ 00:23:30.531 { 00:23:30.531 "name": "spare", 00:23:30.532 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:30.532 "is_configured": true, 00:23:30.532 "data_offset": 256, 00:23:30.532 "data_size": 7936 00:23:30.532 }, 00:23:30.532 { 00:23:30.532 "name": "BaseBdev2", 00:23:30.532 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:30.532 "is_configured": true, 00:23:30.532 "data_offset": 256, 00:23:30.532 "data_size": 7936 00:23:30.532 } 00:23:30.532 ] 00:23:30.532 }' 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # 
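The loop above kept re-reading the same RPC until the rebuild finished and the process block dropped back to none. A minimal sketch of one iteration of that progress check, assuming the socket and jq filters recorded here (the info variable name is illustrative):

    # Re-query raid_bdev1 and confirm an active rebuild targeting 'spare'
    info=$(/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    echo "$info" | jq -r '.process.type // "none"'      # expect: rebuild while in progress
    echo "$info" | jq -r '.process.target // "none"'    # expect: spare
    # .process.progress.blocks and .process.progress.percent report how far the rebuild has advanced.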
verify_raid_bdev_process raid_bdev1 none none 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.532 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.790 "name": "raid_bdev1", 00:23:30.790 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:30.790 "strip_size_kb": 0, 00:23:30.790 "state": "online", 00:23:30.790 "raid_level": "raid1", 00:23:30.790 "superblock": true, 00:23:30.790 "num_base_bdevs": 2, 00:23:30.790 "num_base_bdevs_discovered": 2, 00:23:30.790 "num_base_bdevs_operational": 2, 00:23:30.790 "base_bdevs_list": [ 00:23:30.790 { 00:23:30.790 "name": "spare", 00:23:30.790 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:30.790 "is_configured": true, 00:23:30.790 "data_offset": 256, 00:23:30.790 "data_size": 7936 00:23:30.790 }, 00:23:30.790 { 00:23:30.790 "name": "BaseBdev2", 00:23:30.790 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:30.790 "is_configured": true, 00:23:30.790 "data_offset": 256, 00:23:30.790 "data_size": 7936 00:23:30.790 } 00:23:30.790 ] 00:23:30.790 }' 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.790 10:30:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.790 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.048 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.048 "name": "raid_bdev1", 00:23:31.048 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:31.048 "strip_size_kb": 0, 00:23:31.048 "state": "online", 00:23:31.048 "raid_level": "raid1", 00:23:31.048 "superblock": true, 00:23:31.048 "num_base_bdevs": 2, 00:23:31.048 "num_base_bdevs_discovered": 2, 00:23:31.048 "num_base_bdevs_operational": 2, 00:23:31.048 "base_bdevs_list": [ 00:23:31.048 { 00:23:31.048 "name": "spare", 00:23:31.048 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:31.048 "is_configured": true, 00:23:31.048 "data_offset": 256, 00:23:31.048 "data_size": 7936 00:23:31.048 }, 00:23:31.048 { 00:23:31.048 "name": "BaseBdev2", 00:23:31.048 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:31.048 "is_configured": true, 00:23:31.048 "data_offset": 256, 00:23:31.048 "data_size": 7936 00:23:31.048 } 00:23:31.048 ] 00:23:31.048 }' 00:23:31.048 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.048 10:30:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:31.614 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:31.614 [2024-07-15 10:30:56.255036] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:31.614 [2024-07-15 10:30:56.255057] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:31.614 [2024-07-15 10:30:56.255096] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.614 [2024-07-15 10:30:56.255133] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.614 [2024-07-15 10:30:56.255141] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22f02b0 name raid_bdev1, state offline 00:23:31.614 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.614 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:23:31.874 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:23:31.874 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:23:31.874 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:23:31.874 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:31.874 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:32.133 [2024-07-15 10:30:56.756305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:32.133 [2024-07-15 10:30:56.756336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.133 [2024-07-15 10:30:56.756352] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2157ef0 00:23:32.133 [2024-07-15 10:30:56.756360] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.133 [2024-07-15 10:30:56.757593] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.133 [2024-07-15 10:30:56.757617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:32.133 [2024-07-15 10:30:56.757656] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:32.133 [2024-07-15 10:30:56.757674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.133 [2024-07-15 10:30:56.757734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:32.133 spare 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.133 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.133 [2024-07-15 10:30:56.858018] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22e2480 00:23:32.133 [2024-07-15 10:30:56.858029] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:32.133 [2024-07-15 10:30:56.858083] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22f1010 00:23:32.133 [2024-07-15 10:30:56.858144] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22e2480 00:23:32.133 [2024-07-15 10:30:56.858151] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x22e2480 00:23:32.133 [2024-07-15 10:30:56.858194] 
bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.391 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:32.391 "name": "raid_bdev1", 00:23:32.391 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:32.391 "strip_size_kb": 0, 00:23:32.391 "state": "online", 00:23:32.391 "raid_level": "raid1", 00:23:32.391 "superblock": true, 00:23:32.391 "num_base_bdevs": 2, 00:23:32.391 "num_base_bdevs_discovered": 2, 00:23:32.392 "num_base_bdevs_operational": 2, 00:23:32.392 "base_bdevs_list": [ 00:23:32.392 { 00:23:32.392 "name": "spare", 00:23:32.392 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:32.392 "is_configured": true, 00:23:32.392 "data_offset": 256, 00:23:32.392 "data_size": 7936 00:23:32.392 }, 00:23:32.392 { 00:23:32.392 "name": "BaseBdev2", 00:23:32.392 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:32.392 "is_configured": true, 00:23:32.392 "data_offset": 256, 00:23:32.392 "data_size": 7936 00:23:32.392 } 00:23:32.392 ] 00:23:32.392 }' 00:23:32.392 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:32.392 10:30:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.650 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:32.650 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:32.650 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:32.650 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:32.650 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:32.650 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.650 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.908 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.908 "name": "raid_bdev1", 00:23:32.908 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:32.908 "strip_size_kb": 0, 00:23:32.908 "state": "online", 00:23:32.908 "raid_level": "raid1", 00:23:32.908 "superblock": true, 00:23:32.908 "num_base_bdevs": 2, 00:23:32.908 "num_base_bdevs_discovered": 2, 00:23:32.908 "num_base_bdevs_operational": 2, 00:23:32.908 "base_bdevs_list": [ 00:23:32.908 { 00:23:32.908 "name": "spare", 00:23:32.908 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:32.908 "is_configured": true, 00:23:32.908 "data_offset": 256, 00:23:32.908 "data_size": 7936 00:23:32.908 }, 00:23:32.908 { 00:23:32.908 "name": "BaseBdev2", 00:23:32.908 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:32.908 "is_configured": true, 00:23:32.908 "data_offset": 256, 00:23:32.908 "data_size": 7936 00:23:32.908 } 00:23:32.908 ] 00:23:32.908 }' 00:23:32.908 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:32.908 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:32.908 10:30:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:32.908 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:32.908 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.909 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:33.167 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.167 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:33.425 [2024-07-15 10:30:57.987518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.425 10:30:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:33.425 "name": "raid_bdev1", 00:23:33.425 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:33.425 "strip_size_kb": 0, 00:23:33.425 "state": "online", 00:23:33.425 "raid_level": "raid1", 00:23:33.425 "superblock": true, 00:23:33.425 "num_base_bdevs": 2, 00:23:33.425 "num_base_bdevs_discovered": 1, 00:23:33.425 "num_base_bdevs_operational": 1, 00:23:33.425 "base_bdevs_list": [ 00:23:33.425 { 00:23:33.425 "name": null, 00:23:33.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.425 "is_configured": false, 00:23:33.425 "data_offset": 256, 00:23:33.425 "data_size": 7936 00:23:33.425 }, 00:23:33.425 { 00:23:33.425 "name": "BaseBdev2", 00:23:33.425 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:33.425 "is_configured": true, 00:23:33.425 "data_offset": 256, 00:23:33.425 
"data_size": 7936 00:23:33.425 } 00:23:33.425 ] 00:23:33.425 }' 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:33.425 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.992 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:34.250 [2024-07-15 10:30:58.793613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:34.250 [2024-07-15 10:30:58.793723] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:34.250 [2024-07-15 10:30:58.793734] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:34.250 [2024-07-15 10:30:58.793755] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:34.250 [2024-07-15 10:30:58.796819] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22f4680 00:23:34.250 [2024-07-15 10:30:58.798460] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:34.250 10:30:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:23:35.185 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.185 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:35.185 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:35.185 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:35.185 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:35.185 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.185 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.444 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.444 "name": "raid_bdev1", 00:23:35.444 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:35.444 "strip_size_kb": 0, 00:23:35.444 "state": "online", 00:23:35.444 "raid_level": "raid1", 00:23:35.444 "superblock": true, 00:23:35.444 "num_base_bdevs": 2, 00:23:35.444 "num_base_bdevs_discovered": 2, 00:23:35.444 "num_base_bdevs_operational": 2, 00:23:35.444 "process": { 00:23:35.444 "type": "rebuild", 00:23:35.444 "target": "spare", 00:23:35.444 "progress": { 00:23:35.444 "blocks": 2816, 00:23:35.444 "percent": 35 00:23:35.444 } 00:23:35.444 }, 00:23:35.444 "base_bdevs_list": [ 00:23:35.444 { 00:23:35.444 "name": "spare", 00:23:35.444 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:35.444 "is_configured": true, 00:23:35.444 "data_offset": 256, 00:23:35.444 "data_size": 7936 00:23:35.444 }, 00:23:35.444 { 00:23:35.444 "name": "BaseBdev2", 00:23:35.444 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:35.444 "is_configured": true, 00:23:35.444 "data_offset": 256, 
00:23:35.444 "data_size": 7936 00:23:35.444 } 00:23:35.444 ] 00:23:35.444 }' 00:23:35.444 10:30:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.444 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.444 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.444 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.445 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:35.704 [2024-07-15 10:31:00.238649] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:35.704 [2024-07-15 10:31:00.308808] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:35.704 [2024-07-15 10:31:00.308840] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.704 [2024-07-15 10:31:00.308850] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:35.704 [2024-07-15 10:31:00.308872] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.704 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.963 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.963 "name": "raid_bdev1", 00:23:35.963 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:35.964 "strip_size_kb": 0, 00:23:35.964 "state": "online", 00:23:35.964 "raid_level": "raid1", 00:23:35.964 "superblock": true, 00:23:35.964 "num_base_bdevs": 2, 00:23:35.964 "num_base_bdevs_discovered": 1, 00:23:35.964 "num_base_bdevs_operational": 1, 00:23:35.964 "base_bdevs_list": [ 
00:23:35.964 { 00:23:35.964 "name": null, 00:23:35.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.964 "is_configured": false, 00:23:35.964 "data_offset": 256, 00:23:35.964 "data_size": 7936 00:23:35.964 }, 00:23:35.964 { 00:23:35.964 "name": "BaseBdev2", 00:23:35.964 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:35.964 "is_configured": true, 00:23:35.964 "data_offset": 256, 00:23:35.964 "data_size": 7936 00:23:35.964 } 00:23:35.964 ] 00:23:35.964 }' 00:23:35.964 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.964 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:36.223 10:31:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:36.482 [2024-07-15 10:31:01.142439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:36.482 [2024-07-15 10:31:01.142472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.482 [2024-07-15 10:31:01.142489] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22f3a20 00:23:36.482 [2024-07-15 10:31:01.142497] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.482 [2024-07-15 10:31:01.142628] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.482 [2024-07-15 10:31:01.142638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:36.482 [2024-07-15 10:31:01.142672] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:36.482 [2024-07-15 10:31:01.142679] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:36.482 [2024-07-15 10:31:01.142686] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:36.482 [2024-07-15 10:31:01.142698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:36.482 [2024-07-15 10:31:01.145691] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22e2760 00:23:36.482 [2024-07-15 10:31:01.146730] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:36.482 spare 00:23:36.482 10:31:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:23:37.419 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.419 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:37.419 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:37.419 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:37.419 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:37.419 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.419 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.678 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:37.678 "name": "raid_bdev1", 00:23:37.678 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:37.678 "strip_size_kb": 0, 00:23:37.678 "state": "online", 00:23:37.678 "raid_level": "raid1", 00:23:37.678 "superblock": true, 00:23:37.678 "num_base_bdevs": 2, 00:23:37.678 "num_base_bdevs_discovered": 2, 00:23:37.678 "num_base_bdevs_operational": 2, 00:23:37.678 "process": { 00:23:37.678 "type": "rebuild", 00:23:37.678 "target": "spare", 00:23:37.678 "progress": { 00:23:37.678 "blocks": 2816, 00:23:37.678 "percent": 35 00:23:37.678 } 00:23:37.678 }, 00:23:37.678 "base_bdevs_list": [ 00:23:37.678 { 00:23:37.678 "name": "spare", 00:23:37.678 "uuid": "1cd4a5b6-a7ad-55d4-8b7e-dfd32a052d69", 00:23:37.678 "is_configured": true, 00:23:37.678 "data_offset": 256, 00:23:37.678 "data_size": 7936 00:23:37.678 }, 00:23:37.678 { 00:23:37.678 "name": "BaseBdev2", 00:23:37.678 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:37.678 "is_configured": true, 00:23:37.678 "data_offset": 256, 00:23:37.678 "data_size": 7936 00:23:37.678 } 00:23:37.678 ] 00:23:37.678 }' 00:23:37.678 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:37.678 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.678 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:37.678 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.678 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:37.937 [2024-07-15 10:31:02.566837] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:37.937 [2024-07-15 
10:31:02.657064] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:37.937 [2024-07-15 10:31:02.657095] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.937 [2024-07-15 10:31:02.657104] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:37.937 [2024-07-15 10:31:02.657109] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:37.937 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:37.937 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:37.937 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:37.937 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:37.937 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:37.938 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:37.938 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.938 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.938 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.938 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.938 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.938 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.195 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.196 "name": "raid_bdev1", 00:23:38.196 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:38.196 "strip_size_kb": 0, 00:23:38.196 "state": "online", 00:23:38.196 "raid_level": "raid1", 00:23:38.196 "superblock": true, 00:23:38.196 "num_base_bdevs": 2, 00:23:38.196 "num_base_bdevs_discovered": 1, 00:23:38.196 "num_base_bdevs_operational": 1, 00:23:38.196 "base_bdevs_list": [ 00:23:38.196 { 00:23:38.196 "name": null, 00:23:38.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.196 "is_configured": false, 00:23:38.196 "data_offset": 256, 00:23:38.196 "data_size": 7936 00:23:38.196 }, 00:23:38.196 { 00:23:38.196 "name": "BaseBdev2", 00:23:38.196 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:38.196 "is_configured": true, 00:23:38.196 "data_offset": 256, 00:23:38.196 "data_size": 7936 00:23:38.196 } 00:23:38.196 ] 00:23:38.196 }' 00:23:38.196 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.196 10:31:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_name=raid_bdev1 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:38.762 "name": "raid_bdev1", 00:23:38.762 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:38.762 "strip_size_kb": 0, 00:23:38.762 "state": "online", 00:23:38.762 "raid_level": "raid1", 00:23:38.762 "superblock": true, 00:23:38.762 "num_base_bdevs": 2, 00:23:38.762 "num_base_bdevs_discovered": 1, 00:23:38.762 "num_base_bdevs_operational": 1, 00:23:38.762 "base_bdevs_list": [ 00:23:38.762 { 00:23:38.762 "name": null, 00:23:38.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.762 "is_configured": false, 00:23:38.762 "data_offset": 256, 00:23:38.762 "data_size": 7936 00:23:38.762 }, 00:23:38.762 { 00:23:38.762 "name": "BaseBdev2", 00:23:38.762 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:38.762 "is_configured": true, 00:23:38.762 "data_offset": 256, 00:23:38.762 "data_size": 7936 00:23:38.762 } 00:23:38.762 ] 00:23:38.762 }' 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:38.762 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:39.022 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:39.022 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:39.023 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:39.339 [2024-07-15 10:31:03.907756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:39.339 [2024-07-15 10:31:03.907790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.339 [2024-07-15 10:31:03.907805] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22e3060 00:23:39.339 [2024-07-15 10:31:03.907813] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.339 [2024-07-15 10:31:03.907935] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.339 [2024-07-15 10:31:03.907945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:39.339 [2024-07-15 10:31:03.908004] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 
00:23:39.339 [2024-07-15 10:31:03.908012] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:39.339 [2024-07-15 10:31:03.908019] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:39.339 BaseBdev1 00:23:39.339 10:31:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.273 10:31:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.531 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.531 "name": "raid_bdev1", 00:23:40.531 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:40.531 "strip_size_kb": 0, 00:23:40.531 "state": "online", 00:23:40.531 "raid_level": "raid1", 00:23:40.531 "superblock": true, 00:23:40.531 "num_base_bdevs": 2, 00:23:40.531 "num_base_bdevs_discovered": 1, 00:23:40.531 "num_base_bdevs_operational": 1, 00:23:40.531 "base_bdevs_list": [ 00:23:40.531 { 00:23:40.531 "name": null, 00:23:40.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.531 "is_configured": false, 00:23:40.531 "data_offset": 256, 00:23:40.531 "data_size": 7936 00:23:40.531 }, 00:23:40.531 { 00:23:40.531 "name": "BaseBdev2", 00:23:40.531 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:40.531 "is_configured": true, 00:23:40.531 "data_offset": 256, 00:23:40.531 "data_size": 7936 00:23:40.531 } 00:23:40.531 ] 00:23:40.531 }' 00:23:40.531 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.531 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:41.096 
10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:41.096 "name": "raid_bdev1", 00:23:41.096 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:41.096 "strip_size_kb": 0, 00:23:41.096 "state": "online", 00:23:41.096 "raid_level": "raid1", 00:23:41.096 "superblock": true, 00:23:41.096 "num_base_bdevs": 2, 00:23:41.096 "num_base_bdevs_discovered": 1, 00:23:41.096 "num_base_bdevs_operational": 1, 00:23:41.096 "base_bdevs_list": [ 00:23:41.096 { 00:23:41.096 "name": null, 00:23:41.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.096 "is_configured": false, 00:23:41.096 "data_offset": 256, 00:23:41.096 "data_size": 7936 00:23:41.096 }, 00:23:41.096 { 00:23:41.096 "name": "BaseBdev2", 00:23:41.096 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:41.096 "is_configured": true, 00:23:41.096 "data_offset": 256, 00:23:41.096 "data_size": 7936 00:23:41.096 } 00:23:41.096 ] 00:23:41.096 }' 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:41.096 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:41.353 [2024-07-15 10:31:05.985109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:41.353 [2024-07-15 10:31:05.985199] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:41.353 [2024-07-15 10:31:05.985210] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:41.353 request: 00:23:41.353 { 00:23:41.353 "base_bdev": "BaseBdev1", 00:23:41.353 "raid_bdev": "raid_bdev1", 00:23:41.353 "method": "bdev_raid_add_base_bdev", 00:23:41.353 "req_id": 1 00:23:41.353 } 00:23:41.353 Got JSON-RPC error response 00:23:41.353 response: 00:23:41.353 { 00:23:41.353 "code": -22, 00:23:41.353 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:41.353 } 00:23:41.353 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:23:41.353 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:41.353 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:41.353 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:41.353 10:31:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.284 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.285 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.285 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.542 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.542 "name": "raid_bdev1", 00:23:42.542 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:42.542 "strip_size_kb": 0, 00:23:42.542 "state": "online", 00:23:42.542 "raid_level": "raid1", 00:23:42.542 "superblock": true, 00:23:42.542 "num_base_bdevs": 2, 00:23:42.542 "num_base_bdevs_discovered": 1, 00:23:42.542 "num_base_bdevs_operational": 1, 00:23:42.542 "base_bdevs_list": [ 00:23:42.542 { 00:23:42.542 "name": null, 00:23:42.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.542 "is_configured": false, 00:23:42.542 "data_offset": 256, 00:23:42.542 "data_size": 7936 00:23:42.542 }, 00:23:42.542 { 00:23:42.542 "name": "BaseBdev2", 00:23:42.542 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:42.542 "is_configured": true, 00:23:42.542 "data_offset": 256, 00:23:42.542 "data_size": 7936 00:23:42.542 } 00:23:42.542 ] 00:23:42.542 }' 00:23:42.542 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.542 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:43.108 "name": "raid_bdev1", 00:23:43.108 "uuid": "4a34f1ce-81a2-44c2-8bfb-325eb6a46b76", 00:23:43.108 "strip_size_kb": 0, 00:23:43.108 "state": "online", 00:23:43.108 "raid_level": "raid1", 00:23:43.108 "superblock": true, 00:23:43.108 "num_base_bdevs": 2, 00:23:43.108 "num_base_bdevs_discovered": 1, 00:23:43.108 "num_base_bdevs_operational": 1, 00:23:43.108 "base_bdevs_list": [ 00:23:43.108 { 00:23:43.108 "name": null, 00:23:43.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.108 "is_configured": false, 00:23:43.108 "data_offset": 256, 00:23:43.108 "data_size": 7936 00:23:43.108 }, 00:23:43.108 { 00:23:43.108 "name": "BaseBdev2", 00:23:43.108 "uuid": "7bb18923-ad5f-5c36-b2df-1b1ba598aa83", 00:23:43.108 "is_configured": true, 00:23:43.108 "data_offset": 256, 00:23:43.108 "data_size": 7936 00:23:43.108 } 00:23:43.108 ] 00:23:43.108 }' 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
jq -r '.process.type // "none"' 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 1904316 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 1904316 ']' 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 1904316 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.108 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1904316 00:23:43.367 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:43.367 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:43.367 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1904316' 00:23:43.367 killing process with pid 1904316 00:23:43.367 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 1904316 00:23:43.367 Received shutdown signal, test time was about 60.000000 seconds 00:23:43.367 00:23:43.367 Latency(us) 00:23:43.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.367 =================================================================================================================== 00:23:43.367 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:43.367 [2024-07-15 10:31:07.939676] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:43.367 [2024-07-15 10:31:07.939741] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:43.367 [2024-07-15 10:31:07.939772] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:43.367 [2024-07-15 10:31:07.939779] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22e2480 name raid_bdev1, state offline 00:23:43.367 10:31:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 1904316 00:23:43.367 [2024-07-15 10:31:07.962698] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:43.367 10:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:23:43.367 00:23:43.367 real 0m23.817s 00:23:43.367 user 0m36.496s 00:23:43.367 sys 0m3.135s 00:23:43.367 10:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.367 10:31:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.367 ************************************ 00:23:43.367 END TEST raid_rebuild_test_sb_md_interleaved 00:23:43.367 ************************************ 00:23:43.626 10:31:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:43.626 
10:31:08 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:23:43.626 10:31:08 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:23:43.626 10:31:08 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 1904316 ']' 00:23:43.626 10:31:08 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 1904316 00:23:43.626 10:31:08 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:23:43.626 00:23:43.626 real 14m18.264s 00:23:43.626 user 23m41.064s 00:23:43.626 sys 2m42.308s 00:23:43.626 10:31:08 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.626 10:31:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:43.626 ************************************ 00:23:43.626 END TEST bdev_raid 00:23:43.626 ************************************ 00:23:43.626 10:31:08 -- common/autotest_common.sh@1142 -- # return 0 00:23:43.626 10:31:08 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:23:43.626 10:31:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:43.626 10:31:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.626 10:31:08 -- common/autotest_common.sh@10 -- # set +x 00:23:43.626 ************************************ 00:23:43.626 START TEST bdevperf_config 00:23:43.626 ************************************ 00:23:43.626 10:31:08 bdevperf_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:23:43.626 * Looking for test storage... 00:23:43.626 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:43.626 00:23:43.626 10:31:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:43.885 10:31:08 bdevperf_config -- 
bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:43.885 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:43.885 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:43.885 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:43.885 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:43.885 10:31:08 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:46.416 10:31:11 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-15 10:31:08.494473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:46.416 [2024-07-15 10:31:08.494519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908785 ] 00:23:46.416 Using job config with 4 jobs 00:23:46.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.416 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:46.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.416 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:46.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.416 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:46.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.416 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:46.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.416 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:46.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.416 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:46.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.416 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:46.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.416 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:46.417 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.417 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:46.417 [2024-07-15 10:31:08.597324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.417 [2024-07-15 10:31:08.681775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.417 cpumask for '\''job0'\'' is too big 00:23:46.417 cpumask for '\''job1'\'' is too big 00:23:46.417 cpumask for '\''job2'\'' is too big 00:23:46.417 cpumask for '\''job3'\'' is too big 00:23:46.417 Running I/O for 2 seconds... 00:23:46.417 00:23:46.417 Latency(us) 00:23:46.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.417 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:46.417 Malloc0 : 2.01 38684.82 37.78 0.00 0.00 6614.94 1225.52 9961.47 00:23:46.417 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:46.417 Malloc0 : 2.01 38661.55 37.76 0.00 0.00 6609.52 1140.33 8860.47 00:23:46.417 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:46.417 Malloc0 : 2.01 38638.38 37.73 0.00 0.00 6604.35 1140.33 7707.03 00:23:46.417 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:46.417 Malloc0 : 2.02 38615.26 37.71 0.00 0.00 6598.93 1140.33 7444.89 00:23:46.417 =================================================================================================================== 00:23:46.417 Total : 154600.02 150.98 0.00 0.00 6606.93 1140.33 9961.47' 00:23:46.417 10:31:11 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-15 10:31:08.494473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:46.418 [2024-07-15 10:31:08.494519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908785 ] 00:23:46.418 Using job config with 4 jobs 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:46.418 [2024-07-15 10:31:08.597324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.418 [2024-07-15 10:31:08.681775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.418 cpumask for '\''job0'\'' is too big 00:23:46.418 cpumask for '\''job1'\'' is too big 00:23:46.418 cpumask for '\''job2'\'' is too big 00:23:46.418 cpumask for '\''job3'\'' is too big 00:23:46.418 Running I/O for 2 seconds... 00:23:46.418 00:23:46.418 Latency(us) 00:23:46.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.418 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:46.418 Malloc0 : 2.01 38684.82 37.78 0.00 0.00 6614.94 1225.52 9961.47 00:23:46.418 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:46.418 Malloc0 : 2.01 38661.55 37.76 0.00 0.00 6609.52 1140.33 8860.47 00:23:46.418 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:46.418 Malloc0 : 2.01 38638.38 37.73 0.00 0.00 6604.35 1140.33 7707.03 00:23:46.418 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:46.418 Malloc0 : 2.02 38615.26 37.71 0.00 0.00 6598.93 1140.33 7444.89 00:23:46.418 =================================================================================================================== 00:23:46.418 Total : 154600.02 150.98 0.00 0.00 6606.93 1140.33 9961.47' 00:23:46.418 10:31:11 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:46.418 10:31:11 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:46.418 10:31:11 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:23:46.418 10:31:11 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:46.418 [2024-07-15 10:31:11.095233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:46.418 [2024-07-15 10:31:11.095284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909184 ] 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:46.418 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:46.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.418 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:46.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.419 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:46.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.419 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:46.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.419 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:46.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.419 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:46.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:46.419 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:46.419 [2024-07-15 10:31:11.199574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.677 [2024-07-15 10:31:11.285522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.677 cpumask for 'job0' is too big 00:23:46.677 cpumask for 'job1' is too big 00:23:46.677 cpumask for 'job2' is too big 00:23:46.677 cpumask for 'job3' is too big 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:23:49.209 Running I/O for 2 seconds... 
00:23:49.209 00:23:49.209 Latency(us) 00:23:49.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.209 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:49.209 Malloc0 : 2.01 39133.01 38.22 0.00 0.00 6534.40 1140.33 10066.33 00:23:49.209 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:49.209 Malloc0 : 2.01 39109.97 38.19 0.00 0.00 6528.98 1179.65 8808.04 00:23:49.209 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:49.209 Malloc0 : 2.01 39150.11 38.23 0.00 0.00 6513.41 1127.22 7759.46 00:23:49.209 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:49.209 Malloc0 : 2.02 39127.22 38.21 0.00 0.00 6508.29 1133.77 7182.75 00:23:49.209 =================================================================================================================== 00:23:49.209 Total : 156520.31 152.85 0.00 0.00 6521.25 1127.22 10066.33' 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:49.209 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:49.209 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:49.209 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:49.209 10:31:13 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:51.740 10:31:16 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-15 10:31:13.699781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:51.740 [2024-07-15 10:31:13.699830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909588 ] 00:23:51.740 Using job config with 3 jobs 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:51.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.740 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 
0000:3f:01.4 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:51.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.741 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:51.741 [2024-07-15 10:31:13.796797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.741 [2024-07-15 10:31:13.883618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.741 cpumask for '\''job0'\'' is too big 00:23:51.741 cpumask for '\''job1'\'' is too big 00:23:51.741 cpumask for '\''job2'\'' is too big 00:23:51.741 Running I/O for 2 seconds... 00:23:51.741 00:23:51.741 Latency(us) 00:23:51.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.741 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:51.741 Malloc0 : 2.01 52123.26 50.90 0.00 0.00 4906.58 1192.76 7549.75 00:23:51.741 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:51.741 Malloc0 : 2.01 52094.13 50.87 0.00 0.00 4902.11 1186.20 6265.24 00:23:51.741 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:51.741 Malloc0 : 2.01 52065.22 50.84 0.00 0.00 4897.85 1146.88 5321.52 00:23:51.741 =================================================================================================================== 00:23:51.741 Total : 156282.62 152.62 0.00 0.00 4902.18 1146.88 7549.75' 00:23:51.741 10:31:16 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-15 10:31:13.699781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:51.742 [2024-07-15 10:31:13.699830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909588 ] 00:23:51.742 Using job config with 3 jobs 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:51.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:51.742 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:51.742 [2024-07-15 10:31:13.796797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.742 [2024-07-15 10:31:13.883618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.742 cpumask for '\''job0'\'' is too big 00:23:51.742 cpumask for '\''job1'\'' is too big 00:23:51.742 cpumask for '\''job2'\'' is too big 00:23:51.742 Running I/O for 2 seconds... 00:23:51.742 00:23:51.742 Latency(us) 00:23:51.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.742 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:51.742 Malloc0 : 2.01 52123.26 50.90 0.00 0.00 4906.58 1192.76 7549.75 00:23:51.742 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:51.742 Malloc0 : 2.01 52094.13 50.87 0.00 0.00 4902.11 1186.20 6265.24 00:23:51.742 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:51.742 Malloc0 : 2.01 52065.22 50.84 0.00 0.00 4897.85 1146.88 5321.52 00:23:51.742 =================================================================================================================== 00:23:51.742 Total : 156282.62 152.62 0.00 0.00 4902.18 1146.88 7549.75' 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:51.742 10:31:16 bdevperf_config -- 
bdevperf/common.sh@13 -- # cat 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:51.742 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:51.742 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:23:51.742 10:31:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:51.743 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:51.743 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:23:51.743 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:23:51.743 10:31:16 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:54.371 10:31:18 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-15 10:31:16.316315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:54.371 [2024-07-15 10:31:16.316366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910126 ] 00:23:54.371 Using job config with 4 jobs 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:54.371 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.371 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:54.371 [2024-07-15 10:31:16.416910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.371 [2024-07-15 10:31:16.503608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.371 cpumask for '\''job0'\'' is too big 00:23:54.371 cpumask for '\''job1'\'' is too big 00:23:54.371 cpumask for '\''job2'\'' is too big 00:23:54.371 cpumask for '\''job3'\'' is too big 00:23:54.371 Running I/O for 2 seconds... 00:23:54.371 00:23:54.371 Latency(us) 00:23:54.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.371 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.371 Malloc0 : 2.03 19202.64 18.75 0.00 0.00 13320.41 2490.37 21181.24 00:23:54.371 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.372 Malloc1 : 2.03 19191.73 18.74 0.00 0.00 13321.49 3053.98 21076.38 00:23:54.372 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.372 Malloc0 : 2.03 19181.15 18.73 0.00 0.00 13296.83 2437.94 18559.80 00:23:54.372 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.372 Malloc1 : 2.03 19170.30 18.72 0.00 0.00 13295.54 2962.23 18559.80 00:23:54.372 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.372 Malloc0 : 2.03 19159.80 18.71 0.00 0.00 13273.16 2437.94 16148.07 00:23:54.372 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.372 Malloc1 : 2.03 19149.04 18.70 0.00 0.00 13271.24 2962.23 16148.07 00:23:54.372 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.372 Malloc0 : 2.03 19138.47 18.69 0.00 0.00 13249.88 2424.83 14155.78 00:23:54.372 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.372 Malloc1 : 2.03 19127.76 18.68 0.00 0.00 13248.64 2962.23 14155.78 00:23:54.372 =================================================================================================================== 00:23:54.372 Total : 153320.90 149.73 0.00 0.00 13284.65 2424.83 21181.24' 00:23:54.372 10:31:18 bdevperf_config -- 
bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-15 10:31:16.316315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:54.372 [2024-07-15 10:31:16.316366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910126 ] 00:23:54.372 Using job config with 4 jobs 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:54.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.372 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:54.372 [2024-07-15 10:31:16.416910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.372 [2024-07-15 10:31:16.503608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.372 cpumask for '\''job0'\'' is too big 00:23:54.372 cpumask for '\''job1'\'' is too big 00:23:54.372 cpumask for '\''job2'\'' is too big 00:23:54.372 cpumask for '\''job3'\'' is too big 00:23:54.372 Running I/O for 2 seconds... 
00:23:54.372
00:23:54.372 Latency(us)
00:23:54.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:54.372 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:23:54.372 Malloc0 : 2.03 19202.64 18.75 0.00 0.00 13320.41 2490.37 21181.24
00:23:54.372 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:23:54.372 Malloc1 : 2.03 19191.73 18.74 0.00 0.00 13321.49 3053.98 21076.38
00:23:54.372 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:23:54.372 Malloc0 : 2.03 19181.15 18.73 0.00 0.00 13296.83 2437.94 18559.80
00:23:54.372 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:23:54.372 Malloc1 : 2.03 19170.30 18.72 0.00 0.00 13295.54 2962.23 18559.80
00:23:54.372 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:23:54.372 Malloc0 : 2.03 19159.80 18.71 0.00 0.00 13273.16 2437.94 16148.07
00:23:54.372 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:23:54.372 Malloc1 : 2.03 19149.04 18.70 0.00 0.00 13271.24 2962.23 16148.07
00:23:54.372 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:23:54.372 Malloc0 : 2.03 19138.47 18.69 0.00 0.00 13249.88 2424.83 14155.78
00:23:54.372 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:23:54.373 Malloc1 : 2.03 19127.76 18.68 0.00 0.00 13248.64 2962.23 14155.78
00:23:54.373 ===================================================================================================================
00:23:54.373 Total : 153320.90 149.73 0.00 0.00 13284.65 2424.83 21181.24'
00:23:54.373 10:31:18 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 10:31:16.316315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:23:54.373 [2024-07-15 10:31:16.316366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910126 ] 00:23:54.373 Using job config with 4 jobs 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested 
device 0000:3f:01.6 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:54.373 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:54.373 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:54.373 [2024-07-15 10:31:16.416910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.373 [2024-07-15 10:31:16.503608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.373 cpumask for '\''job0'\'' is too big 00:23:54.373 cpumask for '\''job1'\'' is too big 00:23:54.373 cpumask for '\''job2'\'' is too big 00:23:54.373 cpumask for '\''job3'\'' is too big 00:23:54.373 Running I/O for 2 seconds... 00:23:54.373 00:23:54.373 Latency(us) 00:23:54.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.373 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.373 Malloc0 : 2.03 19202.64 18.75 0.00 0.00 13320.41 2490.37 21181.24 00:23:54.373 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.373 Malloc1 : 2.03 19191.73 18.74 0.00 0.00 13321.49 3053.98 21076.38 00:23:54.373 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.373 Malloc0 : 2.03 19181.15 18.73 0.00 0.00 13296.83 2437.94 18559.80 00:23:54.373 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.373 Malloc1 : 2.03 19170.30 18.72 0.00 0.00 13295.54 2962.23 18559.80 00:23:54.373 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.373 Malloc0 : 2.03 19159.80 18.71 0.00 0.00 13273.16 2437.94 16148.07 00:23:54.373 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.373 Malloc1 : 2.03 19149.04 18.70 0.00 0.00 13271.24 2962.23 16148.07 00:23:54.373 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.373 Malloc0 : 2.03 19138.47 18.69 0.00 0.00 13249.88 2424.83 14155.78 00:23:54.373 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:54.373 Malloc1 : 2.03 19127.76 18.68 0.00 0.00 13248.64 2962.23 14155.78 00:23:54.373 =================================================================================================================== 00:23:54.373 Total : 153320.90 149.73 0.00 0.00 13284.65 2424.83 21181.24' 00:23:54.373 10:31:18 bdevperf_config -- 
bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:54.373 10:31:18 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:54.373 10:31:18 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:23:54.373 10:31:18 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:23:54.373 10:31:18 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:23:54.373 10:31:18 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:54.373 00:23:54.373 real 0m10.578s 00:23:54.373 user 0m9.490s 00:23:54.373 sys 0m0.961s 00:23:54.373 10:31:18 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:54.373 10:31:18 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:23:54.373 ************************************ 00:23:54.373 END TEST bdevperf_config 00:23:54.373 ************************************ 00:23:54.373 10:31:18 -- common/autotest_common.sh@1142 -- # return 0 00:23:54.373 10:31:18 -- spdk/autotest.sh@192 -- # uname -s 00:23:54.373 10:31:18 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:23:54.374 10:31:18 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:54.374 10:31:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:54.374 10:31:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.374 10:31:18 -- common/autotest_common.sh@10 -- # set +x 00:23:54.374 ************************************ 00:23:54.374 START TEST reactor_set_interrupt 00:23:54.374 ************************************ 00:23:54.374 10:31:18 reactor_set_interrupt -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:54.374 * Looking for test storage... 00:23:54.374 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:54.374 10:31:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:23:54.374 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:54.374 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:54.374 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:54.374 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
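The check traced just above reduces to a small text extraction: common.sh captures the bdevperf output, pulls the "Using job config with N jobs" notice out of it, and test_config.sh compares the number against the expected 4. A minimal sketch of that extraction using the same grep pipeline as the trace (the authoritative helper is get_num_jobs in bdevperf/common.sh and may differ in detail):

get_num_jobs() {
    # Return the N from "Using job config with N jobs" in captured bdevperf output.
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

# The run above reports "Using job config with 4 jobs", so the test's
# [[ 4 == \4 ]] comparison passes.
out='... Using job config with 4 jobs ...'
[[ $(get_num_jobs "$out") == 4 ]] && echo "job count OK"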
00:23:54.374 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:23:54.374 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:23:54.374 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:54.374 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:23:54.374 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:54.374 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:54.374 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:23:54.374 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:23:54.374 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:23:54.374 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@23 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@59 -- # 
CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:23:54.374 10:31:19 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:23:54.375 10:31:19 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:23:54.375 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@12 -- # 
_examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:54.375 #define SPDK_CONFIG_H 00:23:54.375 #define SPDK_CONFIG_APPS 1 00:23:54.375 #define SPDK_CONFIG_ARCH native 00:23:54.375 #undef SPDK_CONFIG_ASAN 00:23:54.375 #undef SPDK_CONFIG_AVAHI 00:23:54.375 #undef SPDK_CONFIG_CET 00:23:54.375 #define SPDK_CONFIG_COVERAGE 1 00:23:54.375 #define SPDK_CONFIG_CROSS_PREFIX 00:23:54.375 #define SPDK_CONFIG_CRYPTO 1 00:23:54.375 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:23:54.375 #undef SPDK_CONFIG_CUSTOMOCF 00:23:54.375 #undef SPDK_CONFIG_DAOS 00:23:54.375 #define SPDK_CONFIG_DAOS_DIR 00:23:54.375 #define SPDK_CONFIG_DEBUG 1 00:23:54.375 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:23:54.375 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:23:54.375 #define SPDK_CONFIG_DPDK_INC_DIR 00:23:54.375 #define SPDK_CONFIG_DPDK_LIB_DIR 00:23:54.375 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:54.375 #undef SPDK_CONFIG_DPDK_UADK 00:23:54.375 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:23:54.375 #define SPDK_CONFIG_EXAMPLES 1 00:23:54.375 #undef SPDK_CONFIG_FC 00:23:54.375 #define SPDK_CONFIG_FC_PATH 00:23:54.375 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:54.375 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:54.375 #undef SPDK_CONFIG_FUSE 00:23:54.375 #undef SPDK_CONFIG_FUZZER 00:23:54.375 #define SPDK_CONFIG_FUZZER_LIB 00:23:54.375 #undef SPDK_CONFIG_GOLANG 00:23:54.375 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:54.375 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:23:54.375 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:54.375 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:23:54.375 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:54.375 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:54.375 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:54.375 #define SPDK_CONFIG_IDXD 1 00:23:54.375 #define SPDK_CONFIG_IDXD_KERNEL 1 00:23:54.375 #define SPDK_CONFIG_IPSEC_MB 1 00:23:54.375 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:23:54.375 #define SPDK_CONFIG_ISAL 1 00:23:54.375 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:54.375 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:54.375 #define SPDK_CONFIG_LIBDIR 00:23:54.375 #undef SPDK_CONFIG_LTO 00:23:54.375 #define SPDK_CONFIG_MAX_LCORES 128 00:23:54.375 #define SPDK_CONFIG_NVME_CUSE 1 00:23:54.375 #undef SPDK_CONFIG_OCF 00:23:54.375 #define SPDK_CONFIG_OCF_PATH 00:23:54.375 #define SPDK_CONFIG_OPENSSL_PATH 00:23:54.375 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:54.375 #define SPDK_CONFIG_PGO_DIR 00:23:54.375 #undef 
SPDK_CONFIG_PGO_USE 00:23:54.375 #define SPDK_CONFIG_PREFIX /usr/local 00:23:54.375 #undef SPDK_CONFIG_RAID5F 00:23:54.375 #undef SPDK_CONFIG_RBD 00:23:54.375 #define SPDK_CONFIG_RDMA 1 00:23:54.375 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:54.375 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:54.375 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:54.375 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:54.375 #define SPDK_CONFIG_SHARED 1 00:23:54.375 #undef SPDK_CONFIG_SMA 00:23:54.375 #define SPDK_CONFIG_TESTS 1 00:23:54.375 #undef SPDK_CONFIG_TSAN 00:23:54.375 #define SPDK_CONFIG_UBLK 1 00:23:54.375 #define SPDK_CONFIG_UBSAN 1 00:23:54.375 #undef SPDK_CONFIG_UNIT_TESTS 00:23:54.375 #undef SPDK_CONFIG_URING 00:23:54.375 #define SPDK_CONFIG_URING_PATH 00:23:54.375 #undef SPDK_CONFIG_URING_ZNS 00:23:54.375 #undef SPDK_CONFIG_USDT 00:23:54.375 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:23:54.375 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:23:54.375 #undef SPDK_CONFIG_VFIO_USER 00:23:54.375 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:54.375 #define SPDK_CONFIG_VHOST 1 00:23:54.375 #define SPDK_CONFIG_VIRTIO 1 00:23:54.375 #undef SPDK_CONFIG_VTUNE 00:23:54.375 #define SPDK_CONFIG_VTUNE_DIR 00:23:54.375 #define SPDK_CONFIG_WERROR 1 00:23:54.375 #define SPDK_CONFIG_WPDK_DIR 00:23:54.375 #undef SPDK_CONFIG_XNVME 00:23:54.375 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:54.375 10:31:19 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:54.375 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:23:54.375 10:31:19 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.375 10:31:19 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.375 10:31:19 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.375 10:31:19 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.375 10:31:19 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.375 10:31:19 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.375 10:31:19 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:23:54.376 10:31:19 reactor_set_interrupt -- paths/export.sh@6 -- 
# echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:23:54.376 10:31:19 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:23:54.376 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:54.636 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:23:54.636 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:23:54.636 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:23:54.636 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:23:54.636 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:23:54.637 10:31:19 
reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 
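The long run of "# : N" / "# export SPDK_TEST_..." pairs above is autotest_common.sh assigning defaults to the per-feature test flags before exporting them; under xtrace the parameter expansion is shown already resolved, which is why flags this job enables (SPDK_TEST_CRYPTO, SPDK_TEST_VBDEV_COMPRESS, SPDK_RUN_UBSAN) trace as ": 1" while the rest fall back to 0. A sketch of the idiom, assuming the usual default-then-export form rather than quoting the file verbatim:

: "${SPDK_TEST_CRYPTO:=0}"   # traces as ": 1" above because this job sets it
export SPDK_TEST_CRYPTO
: "${SPDK_TEST_FTL:=0}"      # traces as ": 0" above; not enabled for this job
export SPDK_TEST_FTL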
00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export 
SPDK_TEST_NVMF_MDNS 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:54.637 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
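Besides the library and Python paths, the environment block above wires up the sanitizers: ASAN and UBSAN options are exported, and a LeakSanitizer suppression file is (re)created so the known libfuse3 leak does not fail the run. A condensed sketch of those steps with the same paths and values as the trace (the exact file-writing in autotest_common.sh may differ):

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# Suppress the known-benign leak reported against libfuse3.so
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file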
00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 1910537 ]] 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 1910537 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@338 -- # 
storage_fallback=/tmp/spdk.CglioS 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.CglioS/tests/interrupt /tmp/spdk.CglioS 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4330127360 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=50746454016 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742297088 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10995843072 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30866337792 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:23:54.638 
10:31:19 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=12338577408 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=9883648 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=30869807104 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=1343488 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:23:54.638 * Looking for test storage... 
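The set_test_storage trace above walks the output of df -T and records each mount's backing device, filesystem type, size, free space and usage in bash associative arrays keyed by mount point, before picking a directory with enough room for the test. A minimal standalone sketch of that pattern; the byte conversion and the final query are illustrative, not the exact autotest_common.sh code:

    #!/usr/bin/env bash
    # Record one row per mount point from `df -T`, mirroring the
    # mounts/fss/sizes/avails/uses arrays seen in the trace above.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))     # df reports 1K blocks; converting to bytes is illustrative
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    # Example query: free space on the filesystem backing a candidate test dir.
    target_mount=$(df /tmp | awk '$1 !~ /Filesystem/{print $6}')
    echo "${avails[$target_mount]} bytes free on $target_mount (${fss[$target_mount]})"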
00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=50746454016 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=13210435584 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:23:54.638 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:54.639 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1910679 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:54.639 10:31:19 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1910679 /var/tmp/spdk.sock 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 1910679 ']' 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
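start_intr_tgt, traced just above, launches the interrupt_tgt example pinned to cores 0-2 (-m 0x07) with its RPC server on /var/tmp/spdk.sock, and waitforlisten then blocks until that socket answers. A hedged sketch of the launch-and-wait pattern; the polling loop is an illustration of the idea, not the real waitforlisten, and it assumes the rpc.py and example paths shown in the trace:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    rpc_addr=/var/tmp/spdk.sock

    # Launch the interrupt-mode target on a three-core mask with RPC on a UNIX socket.
    "$SPDK/build/examples/interrupt_tgt" -m 0x07 -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!

    # Poll until the RPC socket answers (illustrative stand-in for waitforlisten).
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            echo "interrupt_tgt ($intr_tgt_pid) is listening on $rpc_addr"
            break
        fi
        sleep 0.5
    done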
00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.639 10:31:19 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:23:54.639 [2024-07-15 10:31:19.288386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:54.639 [2024-07-15 10:31:19.288434] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910679 ] 00:23:54.639 qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used (warning pair repeated once per QAT VF: 0000:3d:01.0-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7, 0000:3f:02.0-02.7) 00:23:54.639 [2024-07-15 10:31:19.379728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:54.899 [2024-07-15 10:31:19.455910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.899 [2024-07-15 10:31:19.455999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.899 [2024-07-15 10:31:19.456003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.899 [2024-07-15 10:31:19.519562] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
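The coremask handed to the target, 0x07, is just bits 0-2 set, which is why the app reports three available cores and starts reactors on cores 0, 1 and 2. A small illustrative helper for decoding such a mask (not part of the test scripts):

    # Print the core indices selected by a hex cpumask, e.g. 0x07 -> "0 1 2".
    decode_cpumask() {
        local mask=$(( $1 )) core=0
        local -a cores=()
        while (( mask )); do
            (( mask & 1 )) && cores+=("$core")
            (( core++ ))
            (( mask >>= 1 ))
        done
        echo "${cores[@]}"
    }
    decode_cpumask 0x07    # -> 0 1 2, matching "Total cores available: 3"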
00:23:55.466 10:31:20 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.466 10:31:20 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:23:55.466 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:23:55.466 10:31:20 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:55.723 Malloc0 00:23:55.723 Malloc1 00:23:55.723 Malloc2 00:23:55.723 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:23:55.723 10:31:20 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:23:55.723 10:31:20 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:55.723 10:31:20 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:55.723 5000+0 records in 00:23:55.723 5000+0 records out 00:23:55.723 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0256493 s, 399 MB/s 00:23:55.723 10:31:20 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:23:55.980 AIO0 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 1910679 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 1910679 without_thd 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1910679 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@59 -- # 
jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:23:55.980 10:31:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:23:56.237 spdk_thread ids are 1 on reactor0. 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1910679 0 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1910679 0 idle 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1910679 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:23:56.237 10:31:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1910679 -w 256 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1910679 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.29 reactor_0' 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1910679 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.29 reactor_0 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1910679 1 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1910679 1 idle 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1910679 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 
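reactor_is_busy_or_idle, exercised above for reactor 0 and continuing below for reactors 1 and 2, samples one batch iteration of top in thread mode, grabs the reactor_N row and reads its %CPU column; at or above 70% counts as busy, at or below 30% counts as idle, matching the comparisons in the trace. A condensed sketch of that check; the field position and truncation mirror the grep/sed/awk pipeline above, and the real helper's retry loop is omitted:

    # Return success if the reactor_<idx> thread of <pid> matches <state> (busy|idle),
    # based on one batch sample of per-thread CPU usage from top.
    reactor_state_ok() {
        local pid=$1 idx=$2 state=$3 line cpu
        line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
        cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu=${cpu%.*}               # truncate "99.9" -> 99, "0.0" -> 0, as the trace does
        if [[ $state == busy ]]; then
            (( cpu >= 70 ))         # a polling reactor burns close to a full core
        else
            (( cpu <= 30 ))         # an interrupt-mode reactor sits near 0%
        fi
    }

    # Usage mirroring the trace: confirm reactor 0 of the target is idle.
    # reactor_state_ok 1910679 0 idle && echo "reactor_0 is idle"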
00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1910679 -w 256 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1910723 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_1' 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1910723 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_1 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1910679 2 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1910679 2 idle 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1910679 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1910679 -w 256 00:23:56.496 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1910724 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_2' 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1910724 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.00 reactor_2 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:23:56.754 10:31:21 
reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:23:56.754 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:23:57.012 [2024-07-15 10:31:21.604952] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:57.012 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:23:57.012 [2024-07-15 10:31:21.776624] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:23:57.012 [2024-07-15 10:31:21.777072] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:23:57.269 [2024-07-15 10:31:21.956517] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:23:57.269 [2024-07-15 10:31:21.956631] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1910679 0 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1910679 0 busy 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1910679 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:23:57.269 10:31:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:23:57.270 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1910679 -w 256 00:23:57.270 10:31:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1910679 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.65 reactor_0' 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1910679 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.65 reactor_0 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1910679 2 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1910679 2 busy 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1910679 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1910679 -w 256 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor='1910724 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.35 reactor_2' 00:23:57.527 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1910724 root 20 0 128.2g 36736 24192 R 99.9 0.1 0:00.35 reactor_2 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:23:57.786 [2024-07-15 10:31:22.484498] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:23:57.786 [2024-07-15 10:31:22.484584] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1910679 2 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1910679 2 idle 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1910679 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1910679 -w 256 00:23:57.786 10:31:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1910724 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.52 reactor_2' 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1910724 root 20 0 128.2g 36736 24192 S 0.0 0.1 0:00.52 reactor_2 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 
idle = \i\d\l\e ]] 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:23:58.044 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:23:58.044 [2024-07-15 10:31:22.828504] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:23:58.044 [2024-07-15 10:31:22.828594] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:58.302 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:23:58.302 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:23:58.302 10:31:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:23:58.302 [2024-07-15 10:31:22.988748] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1910679 0 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1910679 0 idle 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1910679 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:23:58.302 10:31:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1910679 -w 256 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1910679 root 20 0 128.2g 36736 24192 S 6.7 0.1 0:01.35 reactor_0' 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1910679 root 20 0 128.2g 36736 24192 S 6.7 0.1 0:01.35 reactor_0 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:23:58.560 
10:31:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:23:58.560 10:31:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 1910679 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 1910679 ']' 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 1910679 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1910679 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1910679' 00:23:58.561 killing process with pid 1910679 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 1910679 00:23:58.561 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 1910679 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1911342 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:58.818 10:31:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1911342 /var/tmp/spdk.sock 00:23:58.818 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 1911342 ']' 00:23:58.818 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.818 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.818 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.818 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.818 10:31:23 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:23:58.818 [2024-07-15 10:31:23.478130] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
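killprocess, traced above before the second target is started, only signals a process it can vouch for: it checks the PID is non-empty and still alive with kill -0, reads the command name with ps so it never kills a sudo wrapper, then sends the default SIGTERM and waits for the child to exit. A sketch of that guard pattern, assuming the target is a child of the current shell as it is in this log:

    # Stop a test process by PID, but only if it is alive and is not sudo
    # (mirrors the killprocess checks traced above).
    killprocess_sketch() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0               # already gone
        process_name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 in this log
        if [[ $process_name == sudo ]]; then
            return 1                                          # never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true    # valid because the target is a child of this shell
    }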
00:23:58.818 [2024-07-15 10:31:23.478180] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911342 ] 00:23:58.818 qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used (warning pair repeated once per QAT VF: 0000:3d:01.0-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7, 0000:3f:02.0-02.7) 00:23:58.819 [2024-07-15 10:31:23.569362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:59.078 [2024-07-15 10:31:23.649922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.078 [2024-07-15 10:31:23.649943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.078 [2024-07-15 10:31:23.649946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.078 [2024-07-15 10:31:23.714111] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
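The second target is prepared the same way as the first: setup_bdev_mem creates Malloc0-2 over RPC, and setup_bdev_aio, which follows just below as it did for the first target, writes a 10 MB zero-filled file with dd and registers it as the AIO0 bdev with a 2048-byte block size. A sketch of that setup; the bdev_malloc_create sizes are assumptions, since the trace only shows rpc.py being invoked and the Malloc names echoed:

    SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
    rpc_py=$SPDK/scripts/rpc.py
    aiofile=$SPDK/test/interrupt/aiofile

    # Hypothetical equivalent of setup_bdev_mem: three malloc bdevs; the 32 MiB /
    # 512-byte-block parameters are illustrative, the trace only shows the names.
    for name in Malloc0 Malloc1 Malloc2; do
        "$rpc_py" bdev_malloc_create -b "$name" 32 512
    done

    # setup_bdev_aio as traced: 2048-byte blocks x 5000 = ~10 MB backing file.
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000
    "$rpc_py" bdev_aio_create "$aiofile" AIO0 2048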
00:23:59.646 10:31:24 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.646 10:31:24 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:23:59.646 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:23:59.646 10:31:24 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:59.905 Malloc0 00:23:59.905 Malloc1 00:23:59.905 Malloc2 00:23:59.905 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:23:59.905 10:31:24 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:23:59.905 10:31:24 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:59.905 10:31:24 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:59.905 5000+0 records in 00:23:59.905 5000+0 records out 00:23:59.905 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0264585 s, 387 MB/s 00:23:59.905 10:31:24 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:00.164 AIO0 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 1911342 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 1911342 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1911342 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:00.164 10:31:24 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == 
$reactor_cpumask)|.id' 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:24:00.165 10:31:24 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:00.425 spdk_thread ids are 1 on reactor0. 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1911342 0 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1911342 0 idle 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1911342 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1911342 -w 256 00:24:00.425 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1911342 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.30 reactor_0' 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1911342 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.30 reactor_0 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1911342 1 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1911342 1 idle 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1911342 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:00.684 10:31:25 reactor_set_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1911342 -w 256 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1911345 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1911345 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1911342 2 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1911342 2 idle 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1911342 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1911342 -w 256 00:24:00.684 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1911346 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1911346 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 
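reactor_get_thread_ids, called right after each setup above, fetches thread_get_stats over RPC and filters it with jq by the reactor's decimal cpumask, which is why thd0_ids holds the app_thread id (1) while thd2_ids comes back empty. A sketch of that lookup using the same jq filter as the trace; the wrapper function name is illustrative:

    # List the SPDK thread ids whose cpumask matches a reactor mask, using the
    # same thread_get_stats + jq filter as the trace.
    rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

    get_thread_ids() {
        local reactor_cpumask=$(( $1 ))    # 0x1 -> 1, 0x4 -> 4, as echoed in the trace
        "$rpc_py" thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }

    thd0_ids=($(get_thread_ids 0x1))    # -> 1 (app_thread) for this target
    thd2_ids=($(get_thread_ids 0x4))    # -> empty until threads are moved to reactor 2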
00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:24:00.943 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:01.202 [2024-07-15 10:31:25.782422] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:01.202 [2024-07-15 10:31:25.782527] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:24:01.202 [2024-07-15 10:31:25.782584] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:01.202 [2024-07-15 10:31:25.950815] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:01.202 [2024-07-15 10:31:25.950896] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1911342 0 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1911342 0 busy 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1911342 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1911342 -w 256 00:24:01.202 10:31:25 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1911342 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.65 reactor_0' 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1911342 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.65 reactor_0 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:01.461 10:31:26 
reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:01.461 10:31:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1911342 2 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1911342 2 busy 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1911342 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1911342 -w 256 00:24:01.462 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1911346 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2' 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1911346 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:01.720 [2024-07-15 10:31:26.480269] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
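Every reactor_is_idle / reactor_is_busy call in this trace reduces to the same measurement: take one batch sample of the target pid's threads with top -bHn 1, grep the reactor_<idx> thread line, read the %CPU column, and accept the reactor as busy at roughly 70% or more, or as idle at 30% or less. The following is a simplified reconstruction of that check, not the interrupt/common.sh implementation itself; it assumes top's ninth column is %CPU, as in the samples captured above, and drops the retry loop the real helper runs:

  # Simplified reconstruction of the busy/idle test driven by the trace.
  reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3   # state: "busy" or "idle"
    local line cpu_rate
    # one batch sample of the process's threads, pick the reactor thread
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
    cpu_rate=$(awk '{print $9}' <<< "$line")   # column 9 is %CPU here
    cpu_rate=${cpu_rate%.*}                    # drop the fractional part
    if [[ $state == busy ]]; then
      (( cpu_rate >= 70 ))    # poll-mode reactors should sit near 100%
    else
      (( cpu_rate <= 30 ))    # interrupt-mode reactors should sit near 0%
    fi
  }
  # usage: reactor_is_busy_or_idle 1911342 0 idle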
00:24:01.720 [2024-07-15 10:31:26.480343] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1911342 2 00:24:01.720 10:31:26 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1911342 2 idle 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1911342 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1911342 -w 256 00:24:01.721 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1911346 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.52 reactor_2' 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1911346 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.52 reactor_2 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:01.979 10:31:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:02.265 [2024-07-15 10:31:26.833162] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:02.265 [2024-07-15 10:31:26.833268] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
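The mode switches themselves are plain rpc.py calls through the test's interrupt_plugin: reactor_set_interrupt_mode <reactor> -d drops a reactor into poll mode (the "disable interrupt mode" NOTICEs above), and the same call without -d returns it to interrupt mode. The sequence this test drives, using the invocations exactly as they appear in the trace (a running target exposing the plugin on the default RPC socket is assumed):

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

  # poll mode: disable interrupt mode on reactors 0 and 2 ...
  "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
  "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d

  # ... then, once the busy checks pass, back to interrupt mode
  "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2
  "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 0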
00:24:02.265 [2024-07-15 10:31:26.833284] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1911342 0 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1911342 0 idle 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1911342 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1911342 -w 256 00:24:02.265 10:31:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:24:02.265 10:31:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1911342 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.35 reactor_0' 00:24:02.265 10:31:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1911342 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.35 reactor_0 00:24:02.265 10:31:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:24:02.265 10:31:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 1911342 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 1911342 ']' 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 1911342 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1911342 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = 
sudo ']' 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1911342' 00:24:02.525 killing process with pid 1911342 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 1911342 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 1911342 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:24:02.525 10:31:27 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:02.525 00:24:02.525 real 0m8.329s 00:24:02.525 user 0m7.272s 00:24:02.525 sys 0m1.825s 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.525 10:31:27 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:02.525 ************************************ 00:24:02.525 END TEST reactor_set_interrupt 00:24:02.525 ************************************ 00:24:02.786 10:31:27 -- common/autotest_common.sh@1142 -- # return 0 00:24:02.786 10:31:27 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:02.786 10:31:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:02.786 10:31:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.786 10:31:27 -- common/autotest_common.sh@10 -- # set +x 00:24:02.786 ************************************ 00:24:02.786 START TEST reap_unregistered_poller 00:24:02.786 ************************************ 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:02.786 * Looking for test storage... 00:24:02.786 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:02.786 10:31:27 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:24:02.786 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:02.786 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:02.786 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:02.786 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
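Teardown of the previous test went through the killprocess helper traced just before the END banner: it rejects an empty pid, confirms the process is still alive with kill -0, reads the process's comm name so a sudo wrapper is not signalled as if it were the app itself, then kills and waits on it. A compact sketch reconstructed from that trace (the sudo-wrapped and non-Linux branches of the real helper are not reproduced):

  # Sketch of the killprocess teardown pattern seen in the trace above.
  killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                    # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name != sudo ]]; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"   # reap the child so the test sees its exit status
    fi
  }
  # usage: killprocess 1911342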
00:24:02.786 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:02.786 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:02.786 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 
00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@57 -- # 
CONFIG_HAVE_LIBBSD=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:24:02.786 10:31:27 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:24:02.787 10:31:27 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:24:02.787 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:02.787 #define SPDK_CONFIG_H 00:24:02.787 #define SPDK_CONFIG_APPS 1 00:24:02.787 #define SPDK_CONFIG_ARCH native 00:24:02.787 #undef SPDK_CONFIG_ASAN 00:24:02.787 #undef SPDK_CONFIG_AVAHI 00:24:02.787 #undef SPDK_CONFIG_CET 00:24:02.787 #define SPDK_CONFIG_COVERAGE 1 00:24:02.787 #define SPDK_CONFIG_CROSS_PREFIX 00:24:02.787 #define SPDK_CONFIG_CRYPTO 1 00:24:02.787 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:24:02.787 #undef SPDK_CONFIG_CUSTOMOCF 00:24:02.787 #undef SPDK_CONFIG_DAOS 00:24:02.787 #define SPDK_CONFIG_DAOS_DIR 00:24:02.787 #define SPDK_CONFIG_DEBUG 1 00:24:02.787 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:24:02.787 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:24:02.787 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:02.787 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:02.787 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:02.787 #undef SPDK_CONFIG_DPDK_UADK 00:24:02.787 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:24:02.787 #define SPDK_CONFIG_EXAMPLES 1 00:24:02.787 #undef SPDK_CONFIG_FC 00:24:02.787 #define SPDK_CONFIG_FC_PATH 00:24:02.787 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:02.787 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:02.787 #undef SPDK_CONFIG_FUSE 00:24:02.787 #undef SPDK_CONFIG_FUZZER 00:24:02.787 #define SPDK_CONFIG_FUZZER_LIB 00:24:02.787 #undef SPDK_CONFIG_GOLANG 00:24:02.787 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:02.787 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:02.787 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:02.787 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:02.787 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:02.787 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:02.787 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:02.787 #define SPDK_CONFIG_IDXD 1 00:24:02.787 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:02.787 #define SPDK_CONFIG_IPSEC_MB 1 00:24:02.787 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:24:02.787 #define SPDK_CONFIG_ISAL 1 00:24:02.787 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:02.787 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:02.787 #define SPDK_CONFIG_LIBDIR 00:24:02.787 #undef SPDK_CONFIG_LTO 
00:24:02.787 #define SPDK_CONFIG_MAX_LCORES 128 00:24:02.787 #define SPDK_CONFIG_NVME_CUSE 1 00:24:02.787 #undef SPDK_CONFIG_OCF 00:24:02.787 #define SPDK_CONFIG_OCF_PATH 00:24:02.787 #define SPDK_CONFIG_OPENSSL_PATH 00:24:02.787 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:02.787 #define SPDK_CONFIG_PGO_DIR 00:24:02.787 #undef SPDK_CONFIG_PGO_USE 00:24:02.787 #define SPDK_CONFIG_PREFIX /usr/local 00:24:02.787 #undef SPDK_CONFIG_RAID5F 00:24:02.787 #undef SPDK_CONFIG_RBD 00:24:02.787 #define SPDK_CONFIG_RDMA 1 00:24:02.787 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:02.787 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:02.787 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:02.787 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:02.787 #define SPDK_CONFIG_SHARED 1 00:24:02.787 #undef SPDK_CONFIG_SMA 00:24:02.787 #define SPDK_CONFIG_TESTS 1 00:24:02.787 #undef SPDK_CONFIG_TSAN 00:24:02.787 #define SPDK_CONFIG_UBLK 1 00:24:02.787 #define SPDK_CONFIG_UBSAN 1 00:24:02.787 #undef SPDK_CONFIG_UNIT_TESTS 00:24:02.787 #undef SPDK_CONFIG_URING 00:24:02.787 #define SPDK_CONFIG_URING_PATH 00:24:02.787 #undef SPDK_CONFIG_URING_ZNS 00:24:02.787 #undef SPDK_CONFIG_USDT 00:24:02.787 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:24:02.787 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:24:02.787 #undef SPDK_CONFIG_VFIO_USER 00:24:02.787 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:02.787 #define SPDK_CONFIG_VHOST 1 00:24:02.787 #define SPDK_CONFIG_VIRTIO 1 00:24:02.787 #undef SPDK_CONFIG_VTUNE 00:24:02.787 #define SPDK_CONFIG_VTUNE_DIR 00:24:02.787 #define SPDK_CONFIG_WERROR 1 00:24:02.787 #define SPDK_CONFIG_WPDK_DIR 00:24:02.787 #undef SPDK_CONFIG_XNVME 00:24:02.787 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:02.787 10:31:27 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:02.787 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:02.787 10:31:27 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.787 10:31:27 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.787 10:31:27 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.787 10:31:27 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.787 10:31:27 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.787 10:31:27 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.787 10:31:27 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:24:02.787 10:31:27 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.787 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@81 -- # 
[[ ............................... != QEMU ]] 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:02.787 10:31:27 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:24:02.788 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 
00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:03.048 10:31:27 reap_unregistered_poller -- 
common/autotest_common.sh@126 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:24:03.048 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:03.049 10:31:27 
reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 
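The long run of ": 0" / "export NAME" pairs above is how autotest_common.sh publishes the test switches from autorun-spdk.conf: each SPDK_TEST_*/SPDK_RUN_* knob keeps any inherited value, gets a default otherwise, and is then exported, and the build library directories are prepended to LD_LIBRARY_PATH (the repeated triple visible in the exported value is consistent with the file being sourced more than once in this run). Only the expanded trace is visible here, so the exact idiom is an assumption; a minimal equivalent would be:

  # Assumed default-then-export idiom behind the ": 0" / "export NAME" pairs.
  : "${RUN_NIGHTLY:=0}";                export RUN_NIGHTLY
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}";   export SPDK_RUN_FUNCTIONAL_TEST
  : "${SPDK_TEST_BLOCKDEV:=0}";         export SPDK_TEST_BLOCKDEV
  : "${SPDK_TEST_CRYPTO:=0}";           export SPDK_TEST_CRYPTO
  : "${SPDK_TEST_VBDEV_COMPRESS:=0}";   export SPDK_TEST_VBDEV_COMPRESS

  # Library paths from the trace, prepended so freshly built .so files win.
  SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib
  DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib
  VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib
  export LD_LIBRARY_PATH=$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR:$LD_LIBRARY_PATH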
00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 1 -eq 1 ]] 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@276 -- # export HUGE_EVEN_ALLOC=yes 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@276 -- # HUGE_EVEN_ALLOC=yes 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 1912233 ]] 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 1912233 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ 
-v testdir ]] 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.skiIRG 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:24:03.049 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.skiIRG/tests/interrupt /tmp/spdk.skiIRG 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=954302464 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4330127360 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=50746290176 00:24:03.050 
10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742297088 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10996006912 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30866337792 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871146496 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4808704 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=12338577408 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348461056 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=9883648 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=30869807104 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871150592 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=1343488 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6174224384 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174228480 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:24:03.050 * Looking for test storage... 
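(Editor's note, illustrative only.) The search that follows walks the df output captured above and keeps the first candidate mount with enough free space; a minimal sketch of the same idea, assuming GNU df and the 2147483648-byte request visible in the trace — the real set_test_storage helper also relocates the test directory and rechecks sizes, which is omitted here:

# Sketch: pick the first mount point with enough free bytes for the test data.
requested_size=2147483648   # 2 GiB, matching the set_test_storage argument in the trace
while read -r source fs size used avail _ mountpoint; do
    if (( avail >= requested_size )); then
        export SPDK_TEST_STORAGE=$mountpoint   # first mount with enough free space wins
        break
    fi
done < <(df -T --block-size=1 | grep -v Filesystem)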
00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=50746290176 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=13210599424 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:03.050 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1912274 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:03.050 10:31:27 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1912274 /var/tmp/spdk.sock 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 1912274 ']' 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:03.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.050 10:31:27 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:03.050 [2024-07-15 10:31:27.707863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:03.050 [2024-07-15 10:31:27.707914] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912274 ] 00:24:03.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.050 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:03.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.050 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:03.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.050 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:03.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:01.3 cannot be used 
00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:03.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:03.051 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:03.051 [2024-07-15 10:31:27.800783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:03.309 [2024-07-15 10:31:27.876580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.309 [2024-07-15 10:31:27.876596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.309 [2024-07-15 10:31:27.876599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.309 [2024-07-15 10:31:27.940323] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
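(Editor's note, illustrative only.) With the interrupt target up, the check that follows boils down to querying the app thread's pollers over the RPC socket and comparing the names before and after the AIO bdev is registered. A condensed sketch of that query, using the rpc.py path and socket shown in the trace, not the test script itself:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
# Grab the first thread's poller lists and flatten the names, as the test does with jq.
app_thread=$("$rpc" -s "$sock" thread_get_pollers | jq -r '.threads[0]')
active=$(jq -r '.active_pollers[].name' <<< "$app_thread")
timed=$(jq -r '.timed_pollers[].name' <<< "$app_thread")
echo "active: $active"
echo "timed:  $timed"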
00:24:03.877 10:31:28 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.877 10:31:28 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:03.877 10:31:28 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.877 10:31:28 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:03.877 10:31:28 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:03.877 "name": "app_thread", 00:24:03.877 "id": 1, 00:24:03.877 "active_pollers": [], 00:24:03.877 "timed_pollers": [ 00:24:03.877 { 00:24:03.877 "name": "rpc_subsystem_poll_servers", 00:24:03.877 "id": 1, 00:24:03.877 "state": "waiting", 00:24:03.877 "run_count": 0, 00:24:03.877 "busy_count": 0, 00:24:03.877 "period_ticks": 10000000 00:24:03.877 } 00:24:03.877 ], 00:24:03.877 "paused_pollers": [] 00:24:03.877 }' 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:03.877 5000+0 records in 00:24:03.877 5000+0 records out 00:24:03.877 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0269788 s, 380 MB/s 00:24:03.877 10:31:28 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:24:04.136 AIO0 00:24:04.136 10:31:28 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:04.394 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:04.394 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:04.394 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:24:04.394 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.394 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:04.394 10:31:29 reap_unregistered_poller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:04.653 "name": "app_thread", 00:24:04.653 "id": 1, 00:24:04.653 "active_pollers": [], 00:24:04.653 "timed_pollers": [ 00:24:04.653 { 00:24:04.653 "name": "rpc_subsystem_poll_servers", 00:24:04.653 "id": 1, 00:24:04.653 "state": "waiting", 00:24:04.653 "run_count": 0, 00:24:04.653 "busy_count": 0, 00:24:04.653 "period_ticks": 10000000 00:24:04.653 } 00:24:04.653 ], 00:24:04.653 "paused_pollers": [] 00:24:04.653 }' 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:04.653 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 1912274 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 1912274 ']' 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 1912274 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1912274 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1912274' 00:24:04.653 killing process with pid 1912274 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 1912274 00:24:04.653 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 1912274 00:24:04.911 10:31:29 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:04.911 10:31:29 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:24:04.911 00:24:04.911 real 0m2.128s 00:24:04.911 user 0m1.182s 00:24:04.911 sys 0m0.595s 00:24:04.911 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.911 10:31:29 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:24:04.911 ************************************ 00:24:04.911 END TEST reap_unregistered_poller 00:24:04.911 ************************************ 00:24:04.911 10:31:29 -- common/autotest_common.sh@1142 -- # return 0 00:24:04.911 10:31:29 -- spdk/autotest.sh@198 -- # 
uname -s 00:24:04.911 10:31:29 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:24:04.911 10:31:29 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:24:04.911 10:31:29 -- spdk/autotest.sh@205 -- # [[ 1 -eq 0 ]] 00:24:04.911 10:31:29 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:04.911 10:31:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.911 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:24:04.911 10:31:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@347 -- # '[' 1 -eq 1 ']' 00:24:04.911 10:31:29 -- spdk/autotest.sh@348 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:24:04.911 10:31:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:04.911 10:31:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.911 10:31:29 -- common/autotest_common.sh@10 -- # set +x 00:24:04.912 ************************************ 00:24:04.912 START TEST compress_compdev 00:24:04.912 ************************************ 00:24:04.912 10:31:29 compress_compdev -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:24:05.170 * Looking for test storage... 
00:24:05.170 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:24:05.170 10:31:29 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:24:05.170 10:31:29 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:05.171 10:31:29 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.171 10:31:29 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.171 10:31:29 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.171 10:31:29 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.171 10:31:29 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.171 10:31:29 compress_compdev -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.171 10:31:29 compress_compdev -- paths/export.sh@5 -- # export PATH 00:24:05.171 10:31:29 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.171 10:31:29 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1912638 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1912638 00:24:05.171 10:31:29 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:05.171 10:31:29 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1912638 ']' 00:24:05.171 10:31:29 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.171 10:31:29 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.171 10:31:29 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
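(Editor's note, illustrative only.) The process being waited on here is the bdevperf instance launched a few entries above; the same command is reproduced below for readability, with paths exactly as in the trace. The flag notes are assumptions about the usual bdevperf options, not taken from this log:

#   -q 32 / -o 4096 / -t 3 map to the run_bdevperf 32 4096 3 arguments (queue depth,
#   I/O size in bytes, run time in seconds); -w verify selects the verify workload,
#   -m 0x6 pins the job to cores 1 and 2, -c points at the compressdev/QAT JSON config,
#   -z starts bdevperf idle until a perform_tests RPC arrives, and -C is passed through as-is.
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf \
    -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 \
    -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json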
00:24:05.171 10:31:29 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.171 10:31:29 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:05.171 [2024-07-15 10:31:29.845168] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:05.171 [2024-07-15 10:31:29.845218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912638 ] 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:01.4 cannot be used 
00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:05.171 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:05.171 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:05.171 [2024-07-15 10:31:29.937210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:05.430 [2024-07-15 10:31:30.013628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.430 [2024-07-15 10:31:30.013632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.998 [2024-07-15 10:31:30.538756] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:05.998 10:31:30 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.998 10:31:30 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:05.998 10:31:30 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:24:05.998 10:31:30 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:05.998 10:31:30 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:09.283 [2024-07-15 10:31:33.661035] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ef2f00 PMD being used: compress_qat 00:24:09.283 10:31:33 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:09.283 10:31:33 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:09.283 10:31:33 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:09.283 10:31:33 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:09.283 10:31:33 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:09.283 10:31:33 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:09.283 10:31:33 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:09.283 10:31:33 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:09.283 [ 00:24:09.283 { 00:24:09.283 "name": "Nvme0n1", 00:24:09.283 "aliases": [ 00:24:09.283 "899e4099-b6cb-4d60-8bec-6a5cc226b7c2" 00:24:09.283 ], 00:24:09.283 "product_name": "NVMe disk", 00:24:09.283 "block_size": 512, 00:24:09.283 "num_blocks": 3907029168, 00:24:09.283 "uuid": "899e4099-b6cb-4d60-8bec-6a5cc226b7c2", 00:24:09.283 "assigned_rate_limits": { 00:24:09.283 "rw_ios_per_sec": 0, 00:24:09.283 "rw_mbytes_per_sec": 0, 00:24:09.283 "r_mbytes_per_sec": 0, 00:24:09.283 "w_mbytes_per_sec": 0 00:24:09.283 }, 00:24:09.283 "claimed": false, 00:24:09.283 "zoned": false, 00:24:09.283 "supported_io_types": { 00:24:09.283 "read": true, 00:24:09.283 "write": true, 00:24:09.283 "unmap": true, 00:24:09.283 "flush": true, 00:24:09.283 "reset": true, 00:24:09.283 "nvme_admin": true, 00:24:09.283 "nvme_io": true, 00:24:09.283 "nvme_io_md": false, 00:24:09.283 "write_zeroes": true, 00:24:09.283 "zcopy": false, 00:24:09.283 "get_zone_info": false, 00:24:09.283 "zone_management": false, 00:24:09.283 "zone_append": false, 00:24:09.283 "compare": false, 00:24:09.283 "compare_and_write": false, 00:24:09.283 "abort": true, 00:24:09.283 "seek_hole": false, 00:24:09.283 "seek_data": false, 00:24:09.283 "copy": false, 00:24:09.283 "nvme_iov_md": false 00:24:09.283 }, 00:24:09.283 "driver_specific": { 00:24:09.283 "nvme": [ 00:24:09.283 { 00:24:09.283 "pci_address": "0000:d8:00.0", 00:24:09.283 "trid": { 00:24:09.283 "trtype": "PCIe", 00:24:09.283 "traddr": "0000:d8:00.0" 00:24:09.283 }, 00:24:09.283 "ctrlr_data": { 00:24:09.283 "cntlid": 0, 00:24:09.283 "vendor_id": "0x8086", 00:24:09.283 "model_number": "INTEL SSDPE2KX020T8", 00:24:09.283 "serial_number": "BTLJ125505KA2P0BGN", 00:24:09.283 "firmware_revision": "VDV10170", 00:24:09.283 "oacs": { 00:24:09.283 "security": 0, 00:24:09.283 "format": 1, 00:24:09.283 "firmware": 1, 00:24:09.283 "ns_manage": 1 00:24:09.283 }, 00:24:09.283 "multi_ctrlr": false, 00:24:09.283 "ana_reporting": false 00:24:09.283 }, 00:24:09.283 "vs": { 00:24:09.283 "nvme_version": "1.2" 00:24:09.283 }, 00:24:09.283 "ns_data": { 00:24:09.283 "id": 1, 00:24:09.283 "can_share": false 00:24:09.283 } 00:24:09.283 } 00:24:09.283 ], 00:24:09.283 "mp_policy": "active_passive" 00:24:09.283 } 00:24:09.283 } 00:24:09.283 ] 00:24:09.283 10:31:34 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:09.283 10:31:34 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:09.541 [2024-07-15 10:31:34.164669] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d42a10 PMD being used: compress_qat 00:24:10.474 a85c5d6f-a773-42a7-ad6d-8131449ba702 00:24:10.475 10:31:35 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:10.732 b9a09be4-b40f-4a34-aaf6-a3ed0423c017 00:24:10.732 10:31:35 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:10.732 10:31:35 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:10.732 10:31:35 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:10.732 10:31:35 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:10.732 10:31:35 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:10.732 10:31:35 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 
00:24:10.732 10:31:35 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:10.732 10:31:35 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:10.990 [ 00:24:10.990 { 00:24:10.990 "name": "b9a09be4-b40f-4a34-aaf6-a3ed0423c017", 00:24:10.990 "aliases": [ 00:24:10.990 "lvs0/lv0" 00:24:10.990 ], 00:24:10.990 "product_name": "Logical Volume", 00:24:10.990 "block_size": 512, 00:24:10.990 "num_blocks": 204800, 00:24:10.990 "uuid": "b9a09be4-b40f-4a34-aaf6-a3ed0423c017", 00:24:10.990 "assigned_rate_limits": { 00:24:10.990 "rw_ios_per_sec": 0, 00:24:10.990 "rw_mbytes_per_sec": 0, 00:24:10.990 "r_mbytes_per_sec": 0, 00:24:10.990 "w_mbytes_per_sec": 0 00:24:10.990 }, 00:24:10.990 "claimed": false, 00:24:10.990 "zoned": false, 00:24:10.990 "supported_io_types": { 00:24:10.990 "read": true, 00:24:10.990 "write": true, 00:24:10.990 "unmap": true, 00:24:10.990 "flush": false, 00:24:10.990 "reset": true, 00:24:10.990 "nvme_admin": false, 00:24:10.990 "nvme_io": false, 00:24:10.990 "nvme_io_md": false, 00:24:10.990 "write_zeroes": true, 00:24:10.990 "zcopy": false, 00:24:10.990 "get_zone_info": false, 00:24:10.990 "zone_management": false, 00:24:10.990 "zone_append": false, 00:24:10.990 "compare": false, 00:24:10.990 "compare_and_write": false, 00:24:10.990 "abort": false, 00:24:10.990 "seek_hole": true, 00:24:10.990 "seek_data": true, 00:24:10.990 "copy": false, 00:24:10.990 "nvme_iov_md": false 00:24:10.990 }, 00:24:10.990 "driver_specific": { 00:24:10.990 "lvol": { 00:24:10.990 "lvol_store_uuid": "a85c5d6f-a773-42a7-ad6d-8131449ba702", 00:24:10.990 "base_bdev": "Nvme0n1", 00:24:10.990 "thin_provision": true, 00:24:10.990 "num_allocated_clusters": 0, 00:24:10.990 "snapshot": false, 00:24:10.990 "clone": false, 00:24:10.990 "esnap_clone": false 00:24:10.990 } 00:24:10.990 } 00:24:10.990 } 00:24:10.990 ] 00:24:10.990 10:31:35 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:10.990 10:31:35 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:24:10.990 10:31:35 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:24:11.248 [2024-07-15 10:31:35.822593] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:11.248 COMP_lvs0/lv0 00:24:11.248 10:31:35 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:11.248 10:31:35 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:11.248 10:31:35 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:11.248 10:31:35 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:11.248 10:31:35 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:11.248 10:31:35 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:11.248 10:31:35 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:11.248 10:31:35 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:11.506 [ 00:24:11.506 { 00:24:11.506 "name": "COMP_lvs0/lv0", 00:24:11.506 "aliases": 
[ 00:24:11.506 "aac642be-e888-5232-b371-c186c427e150" 00:24:11.506 ], 00:24:11.506 "product_name": "compress", 00:24:11.506 "block_size": 512, 00:24:11.506 "num_blocks": 200704, 00:24:11.506 "uuid": "aac642be-e888-5232-b371-c186c427e150", 00:24:11.506 "assigned_rate_limits": { 00:24:11.506 "rw_ios_per_sec": 0, 00:24:11.506 "rw_mbytes_per_sec": 0, 00:24:11.506 "r_mbytes_per_sec": 0, 00:24:11.506 "w_mbytes_per_sec": 0 00:24:11.506 }, 00:24:11.506 "claimed": false, 00:24:11.506 "zoned": false, 00:24:11.506 "supported_io_types": { 00:24:11.506 "read": true, 00:24:11.506 "write": true, 00:24:11.506 "unmap": false, 00:24:11.506 "flush": false, 00:24:11.506 "reset": false, 00:24:11.506 "nvme_admin": false, 00:24:11.506 "nvme_io": false, 00:24:11.506 "nvme_io_md": false, 00:24:11.506 "write_zeroes": true, 00:24:11.506 "zcopy": false, 00:24:11.506 "get_zone_info": false, 00:24:11.506 "zone_management": false, 00:24:11.506 "zone_append": false, 00:24:11.506 "compare": false, 00:24:11.506 "compare_and_write": false, 00:24:11.506 "abort": false, 00:24:11.506 "seek_hole": false, 00:24:11.506 "seek_data": false, 00:24:11.506 "copy": false, 00:24:11.506 "nvme_iov_md": false 00:24:11.506 }, 00:24:11.506 "driver_specific": { 00:24:11.506 "compress": { 00:24:11.506 "name": "COMP_lvs0/lv0", 00:24:11.506 "base_bdev_name": "b9a09be4-b40f-4a34-aaf6-a3ed0423c017" 00:24:11.506 } 00:24:11.506 } 00:24:11.506 } 00:24:11.506 ] 00:24:11.506 10:31:36 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:11.506 10:31:36 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:11.506 [2024-07-15 10:31:36.220423] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f49e81b15c0 PMD being used: compress_qat 00:24:11.506 [2024-07-15 10:31:36.222027] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1ef0360 PMD being used: compress_qat 00:24:11.506 Running I/O for 3 seconds... 
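(Editor's note, illustrative only.) The bdev stack exercised during these three seconds was assembled entirely over RPC in the entries above; condensed into a sketch with the same commands and arguments the trace shows:

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
# Nvme0n1 was attached via gen_nvme.sh | load_subsystem_config, then layered as:
"$rpc" bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0   # lvstore on the NVMe disk
"$rpc" bdev_lvol_create -t -l lvs0 lv0 100                         # thin lvol (204800 512B blocks per the dump above)
"$rpc" bdev_compress_create -b lvs0/lv0 -p /tmp/pmem               # exposes COMP_lvs0/lv0
# bdevperf was started with -z, so the run itself is triggered separately:
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests

The results that follow list the two verify jobs (core masks 0x2 and 0x4) separately and then the combined total.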
00:24:14.821 00:24:14.821 Latency(us) 00:24:14.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.821 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:14.821 Verification LBA range: start 0x0 length 0x3100 00:24:14.821 COMP_lvs0/lv0 : 3.01 4067.63 15.89 0.00 0.00 7825.54 129.43 15414.07 00:24:14.821 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:14.821 Verification LBA range: start 0x3100 length 0x3100 00:24:14.821 COMP_lvs0/lv0 : 3.01 4187.52 16.36 0.00 0.00 7601.36 118.78 15309.21 00:24:14.821 =================================================================================================================== 00:24:14.821 Total : 8255.14 32.25 0.00 0.00 7711.84 118.78 15414.07 00:24:14.821 0 00:24:14.821 10:31:39 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:14.821 10:31:39 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:14.821 10:31:39 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:14.821 10:31:39 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:14.821 10:31:39 compress_compdev -- compress/compress.sh@78 -- # killprocess 1912638 00:24:14.821 10:31:39 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1912638 ']' 00:24:14.821 10:31:39 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1912638 00:24:14.821 10:31:39 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:14.821 10:31:39 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.079 10:31:39 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1912638 00:24:15.079 10:31:39 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:15.079 10:31:39 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:15.079 10:31:39 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1912638' 00:24:15.079 killing process with pid 1912638 00:24:15.079 10:31:39 compress_compdev -- common/autotest_common.sh@967 -- # kill 1912638 00:24:15.079 Received shutdown signal, test time was about 3.000000 seconds 00:24:15.079 00:24:15.079 Latency(us) 00:24:15.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.079 =================================================================================================================== 00:24:15.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.079 10:31:39 compress_compdev -- common/autotest_common.sh@972 -- # wait 1912638 00:24:17.609 10:31:42 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:24:17.609 10:31:42 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:17.609 10:31:42 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1914767 00:24:17.609 10:31:42 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.609 10:31:42 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:17.609 10:31:42 
compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1914767 00:24:17.609 10:31:42 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1914767 ']' 00:24:17.609 10:31:42 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.609 10:31:42 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.609 10:31:42 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.609 10:31:42 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.609 10:31:42 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:17.609 [2024-07-15 10:31:42.102755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:17.609 [2024-07-15 10:31:42.102805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914767 ] 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:17.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:17.609 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:17.609 [2024-07-15 10:31:42.193385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:17.609 [2024-07-15 10:31:42.259547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.609 [2024-07-15 10:31:42.259550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.177 [2024-07-15 10:31:42.759313] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:18.177 10:31:42 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.177 10:31:42 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:18.177 10:31:42 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:24:18.177 10:31:42 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:18.177 10:31:42 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:21.466 [2024-07-15 10:31:45.917295] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1381f00 PMD being used: compress_qat 00:24:21.466 10:31:45 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:21.466 10:31:45 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:21.466 
10:31:45 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:21.466 10:31:45 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:21.466 10:31:45 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:21.466 10:31:45 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:21.466 10:31:45 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:21.466 10:31:46 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:21.726 [ 00:24:21.726 { 00:24:21.726 "name": "Nvme0n1", 00:24:21.726 "aliases": [ 00:24:21.726 "58652f16-b768-4956-bb64-c57d73d52a1f" 00:24:21.726 ], 00:24:21.726 "product_name": "NVMe disk", 00:24:21.726 "block_size": 512, 00:24:21.726 "num_blocks": 3907029168, 00:24:21.726 "uuid": "58652f16-b768-4956-bb64-c57d73d52a1f", 00:24:21.726 "assigned_rate_limits": { 00:24:21.726 "rw_ios_per_sec": 0, 00:24:21.726 "rw_mbytes_per_sec": 0, 00:24:21.726 "r_mbytes_per_sec": 0, 00:24:21.726 "w_mbytes_per_sec": 0 00:24:21.726 }, 00:24:21.726 "claimed": false, 00:24:21.726 "zoned": false, 00:24:21.726 "supported_io_types": { 00:24:21.726 "read": true, 00:24:21.726 "write": true, 00:24:21.726 "unmap": true, 00:24:21.726 "flush": true, 00:24:21.726 "reset": true, 00:24:21.726 "nvme_admin": true, 00:24:21.726 "nvme_io": true, 00:24:21.726 "nvme_io_md": false, 00:24:21.726 "write_zeroes": true, 00:24:21.726 "zcopy": false, 00:24:21.726 "get_zone_info": false, 00:24:21.726 "zone_management": false, 00:24:21.726 "zone_append": false, 00:24:21.726 "compare": false, 00:24:21.726 "compare_and_write": false, 00:24:21.726 "abort": true, 00:24:21.726 "seek_hole": false, 00:24:21.726 "seek_data": false, 00:24:21.726 "copy": false, 00:24:21.726 "nvme_iov_md": false 00:24:21.726 }, 00:24:21.726 "driver_specific": { 00:24:21.726 "nvme": [ 00:24:21.726 { 00:24:21.726 "pci_address": "0000:d8:00.0", 00:24:21.726 "trid": { 00:24:21.726 "trtype": "PCIe", 00:24:21.726 "traddr": "0000:d8:00.0" 00:24:21.726 }, 00:24:21.726 "ctrlr_data": { 00:24:21.726 "cntlid": 0, 00:24:21.726 "vendor_id": "0x8086", 00:24:21.726 "model_number": "INTEL SSDPE2KX020T8", 00:24:21.726 "serial_number": "BTLJ125505KA2P0BGN", 00:24:21.726 "firmware_revision": "VDV10170", 00:24:21.726 "oacs": { 00:24:21.726 "security": 0, 00:24:21.726 "format": 1, 00:24:21.726 "firmware": 1, 00:24:21.726 "ns_manage": 1 00:24:21.726 }, 00:24:21.726 "multi_ctrlr": false, 00:24:21.726 "ana_reporting": false 00:24:21.726 }, 00:24:21.726 "vs": { 00:24:21.726 "nvme_version": "1.2" 00:24:21.726 }, 00:24:21.726 "ns_data": { 00:24:21.726 "id": 1, 00:24:21.726 "can_share": false 00:24:21.726 } 00:24:21.726 } 00:24:21.726 ], 00:24:21.726 "mp_policy": "active_passive" 00:24:21.726 } 00:24:21.726 } 00:24:21.726 ] 00:24:21.726 10:31:46 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:21.726 10:31:46 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:21.726 [2024-07-15 10:31:46.445072] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x11d1a10 PMD being used: compress_qat 00:24:23.101 c510296f-a70c-41e7-9018-3a00497c9422 00:24:23.101 10:31:47 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:23.101 267fb0e9-3a77-40c4-b146-cadcd5532b5d 00:24:23.101 10:31:47 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:23.101 10:31:47 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:23.101 10:31:47 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:23.101 10:31:47 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:23.101 10:31:47 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:23.101 10:31:47 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:23.101 10:31:47 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:23.101 10:31:47 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:23.360 [ 00:24:23.360 { 00:24:23.360 "name": "267fb0e9-3a77-40c4-b146-cadcd5532b5d", 00:24:23.360 "aliases": [ 00:24:23.360 "lvs0/lv0" 00:24:23.360 ], 00:24:23.360 "product_name": "Logical Volume", 00:24:23.360 "block_size": 512, 00:24:23.360 "num_blocks": 204800, 00:24:23.360 "uuid": "267fb0e9-3a77-40c4-b146-cadcd5532b5d", 00:24:23.360 "assigned_rate_limits": { 00:24:23.360 "rw_ios_per_sec": 0, 00:24:23.360 "rw_mbytes_per_sec": 0, 00:24:23.360 "r_mbytes_per_sec": 0, 00:24:23.360 "w_mbytes_per_sec": 0 00:24:23.360 }, 00:24:23.360 "claimed": false, 00:24:23.360 "zoned": false, 00:24:23.360 "supported_io_types": { 00:24:23.360 "read": true, 00:24:23.360 "write": true, 00:24:23.360 "unmap": true, 00:24:23.360 "flush": false, 00:24:23.360 "reset": true, 00:24:23.360 "nvme_admin": false, 00:24:23.360 "nvme_io": false, 00:24:23.360 "nvme_io_md": false, 00:24:23.360 "write_zeroes": true, 00:24:23.360 "zcopy": false, 00:24:23.360 "get_zone_info": false, 00:24:23.360 "zone_management": false, 00:24:23.360 "zone_append": false, 00:24:23.360 "compare": false, 00:24:23.360 "compare_and_write": false, 00:24:23.360 "abort": false, 00:24:23.360 "seek_hole": true, 00:24:23.360 "seek_data": true, 00:24:23.360 "copy": false, 00:24:23.360 "nvme_iov_md": false 00:24:23.360 }, 00:24:23.360 "driver_specific": { 00:24:23.360 "lvol": { 00:24:23.360 "lvol_store_uuid": "c510296f-a70c-41e7-9018-3a00497c9422", 00:24:23.360 "base_bdev": "Nvme0n1", 00:24:23.360 "thin_provision": true, 00:24:23.360 "num_allocated_clusters": 0, 00:24:23.360 "snapshot": false, 00:24:23.360 "clone": false, 00:24:23.360 "esnap_clone": false 00:24:23.360 } 00:24:23.360 } 00:24:23.360 } 00:24:23.360 ] 00:24:23.360 10:31:48 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:23.360 10:31:48 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:24:23.360 10:31:48 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:24:23.619 [2024-07-15 10:31:48.187862] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:23.619 COMP_lvs0/lv0 00:24:23.619 10:31:48 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:23.619 10:31:48 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:23.619 10:31:48 compress_compdev -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:24:23.619 10:31:48 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:23.619 10:31:48 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:23.619 10:31:48 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:23.619 10:31:48 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:23.619 10:31:48 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:23.878 [ 00:24:23.878 { 00:24:23.878 "name": "COMP_lvs0/lv0", 00:24:23.878 "aliases": [ 00:24:23.878 "f5588c05-ca12-51ec-99e7-80ecf5dd77ad" 00:24:23.878 ], 00:24:23.878 "product_name": "compress", 00:24:23.878 "block_size": 512, 00:24:23.878 "num_blocks": 200704, 00:24:23.878 "uuid": "f5588c05-ca12-51ec-99e7-80ecf5dd77ad", 00:24:23.878 "assigned_rate_limits": { 00:24:23.878 "rw_ios_per_sec": 0, 00:24:23.878 "rw_mbytes_per_sec": 0, 00:24:23.878 "r_mbytes_per_sec": 0, 00:24:23.878 "w_mbytes_per_sec": 0 00:24:23.878 }, 00:24:23.878 "claimed": false, 00:24:23.878 "zoned": false, 00:24:23.878 "supported_io_types": { 00:24:23.878 "read": true, 00:24:23.878 "write": true, 00:24:23.878 "unmap": false, 00:24:23.878 "flush": false, 00:24:23.878 "reset": false, 00:24:23.878 "nvme_admin": false, 00:24:23.878 "nvme_io": false, 00:24:23.878 "nvme_io_md": false, 00:24:23.878 "write_zeroes": true, 00:24:23.878 "zcopy": false, 00:24:23.878 "get_zone_info": false, 00:24:23.878 "zone_management": false, 00:24:23.878 "zone_append": false, 00:24:23.878 "compare": false, 00:24:23.878 "compare_and_write": false, 00:24:23.878 "abort": false, 00:24:23.878 "seek_hole": false, 00:24:23.878 "seek_data": false, 00:24:23.878 "copy": false, 00:24:23.878 "nvme_iov_md": false 00:24:23.878 }, 00:24:23.878 "driver_specific": { 00:24:23.878 "compress": { 00:24:23.878 "name": "COMP_lvs0/lv0", 00:24:23.878 "base_bdev_name": "267fb0e9-3a77-40c4-b146-cadcd5532b5d" 00:24:23.878 } 00:24:23.878 } 00:24:23.878 } 00:24:23.878 ] 00:24:23.878 10:31:48 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:23.878 10:31:48 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:23.878 [2024-07-15 10:31:48.629799] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fcc201b15c0 PMD being used: compress_qat 00:24:23.879 [2024-07-15 10:31:48.631469] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x137f3d0 PMD being used: compress_qat 00:24:23.879 Running I/O for 3 seconds... 
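The trace above is compress.sh's create_vols helper for the 512-byte chunk run: gen_nvme.sh feeds the NVMe attach config to load_subsystem_config, a lvstore and a 100 MiB thin-provisioned lvol are created on Nvme0n1, and bdev_compress_create layers COMP_lvs0/lv0 on top of it with -p /tmp/pmem and 512-byte chunks. A minimal sketch of that RPC sequence, with the absolute workspace paths shortened and a bdevperf target already listening on /var/tmp/spdk.sock assumed:

    rpc=./scripts/rpc.py                                       # SPDK repo root assumed as CWD
    ./scripts/gen_nvme.sh | $rpc load_subsystem_config         # attach the local NVMe controller(s)
    $rpc bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
    $rpc bdev_lvol_create -t -l lvs0 lv0 100                   # 100 MiB thin-provisioned lvol
    $rpc bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512  # compress vbdev, 512 B chunk size
    $rpc bdev_wait_for_examine                                 # block until COMP_lvs0/lv0 is registered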
00:24:27.159 00:24:27.159 Latency(us) 00:24:27.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.159 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:27.159 Verification LBA range: start 0x0 length 0x3100 00:24:27.159 COMP_lvs0/lv0 : 3.01 4196.55 16.39 0.00 0.00 7580.92 127.80 13631.49 00:24:27.159 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:27.159 Verification LBA range: start 0x3100 length 0x3100 00:24:27.159 COMP_lvs0/lv0 : 3.01 4303.86 16.81 0.00 0.00 7400.39 121.24 12740.20 00:24:27.159 =================================================================================================================== 00:24:27.159 Total : 8500.40 33.20 0.00 0.00 7489.60 121.24 13631.49 00:24:27.159 0 00:24:27.159 10:31:51 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:27.159 10:31:51 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:27.159 10:31:51 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:27.418 10:31:51 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:27.418 10:31:51 compress_compdev -- compress/compress.sh@78 -- # killprocess 1914767 00:24:27.418 10:31:51 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1914767 ']' 00:24:27.418 10:31:51 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1914767 00:24:27.418 10:31:52 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:27.418 10:31:52 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.418 10:31:52 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1914767 00:24:27.418 10:31:52 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:27.418 10:31:52 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:27.418 10:31:52 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1914767' 00:24:27.418 killing process with pid 1914767 00:24:27.418 10:31:52 compress_compdev -- common/autotest_common.sh@967 -- # kill 1914767 00:24:27.418 Received shutdown signal, test time was about 3.000000 seconds 00:24:27.418 00:24:27.418 Latency(us) 00:24:27.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.418 =================================================================================================================== 00:24:27.418 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.418 10:31:52 compress_compdev -- common/autotest_common.sh@972 -- # wait 1914767 00:24:29.950 10:31:54 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:24:29.950 10:31:54 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:29.950 10:31:54 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1916870 00:24:29.950 10:31:54 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.950 10:31:54 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:24:29.950 10:31:54 
compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1916870 00:24:29.950 10:31:54 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1916870 ']' 00:24:29.950 10:31:54 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.950 10:31:54 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.950 10:31:54 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.950 10:31:54 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.950 10:31:54 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:29.950 [2024-07-15 10:31:54.553948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:29.950 [2024-07-15 10:31:54.553999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916870 ] 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.950 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:29.950 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.951 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:29.951 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.951 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:29.951 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:29.951 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:29.951 [2024-07-15 10:31:54.646573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:29.951 [2024-07-15 10:31:54.716106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.951 [2024-07-15 10:31:54.716109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.517 [2024-07-15 10:31:55.220682] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:30.775 10:31:55 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:30.775 10:31:55 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:30.775 10:31:55 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:24:30.775 10:31:55 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:30.775 10:31:55 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:34.113 [2024-07-15 10:31:58.373243] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x240ef00 PMD being used: compress_qat 00:24:34.113 10:31:58 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:34.113 10:31:58 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:34.113 
10:31:58 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:34.113 10:31:58 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:34.113 10:31:58 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:34.113 10:31:58 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:34.113 10:31:58 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:34.113 10:31:58 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:34.113 [ 00:24:34.113 { 00:24:34.113 "name": "Nvme0n1", 00:24:34.113 "aliases": [ 00:24:34.113 "8f37dc6b-89ad-4446-ad8d-744f1e27e2fe" 00:24:34.113 ], 00:24:34.113 "product_name": "NVMe disk", 00:24:34.113 "block_size": 512, 00:24:34.113 "num_blocks": 3907029168, 00:24:34.113 "uuid": "8f37dc6b-89ad-4446-ad8d-744f1e27e2fe", 00:24:34.113 "assigned_rate_limits": { 00:24:34.113 "rw_ios_per_sec": 0, 00:24:34.113 "rw_mbytes_per_sec": 0, 00:24:34.113 "r_mbytes_per_sec": 0, 00:24:34.113 "w_mbytes_per_sec": 0 00:24:34.113 }, 00:24:34.113 "claimed": false, 00:24:34.113 "zoned": false, 00:24:34.113 "supported_io_types": { 00:24:34.113 "read": true, 00:24:34.113 "write": true, 00:24:34.113 "unmap": true, 00:24:34.113 "flush": true, 00:24:34.113 "reset": true, 00:24:34.113 "nvme_admin": true, 00:24:34.113 "nvme_io": true, 00:24:34.113 "nvme_io_md": false, 00:24:34.113 "write_zeroes": true, 00:24:34.113 "zcopy": false, 00:24:34.113 "get_zone_info": false, 00:24:34.113 "zone_management": false, 00:24:34.113 "zone_append": false, 00:24:34.113 "compare": false, 00:24:34.113 "compare_and_write": false, 00:24:34.113 "abort": true, 00:24:34.113 "seek_hole": false, 00:24:34.113 "seek_data": false, 00:24:34.113 "copy": false, 00:24:34.113 "nvme_iov_md": false 00:24:34.113 }, 00:24:34.113 "driver_specific": { 00:24:34.114 "nvme": [ 00:24:34.114 { 00:24:34.114 "pci_address": "0000:d8:00.0", 00:24:34.114 "trid": { 00:24:34.114 "trtype": "PCIe", 00:24:34.114 "traddr": "0000:d8:00.0" 00:24:34.114 }, 00:24:34.114 "ctrlr_data": { 00:24:34.114 "cntlid": 0, 00:24:34.114 "vendor_id": "0x8086", 00:24:34.114 "model_number": "INTEL SSDPE2KX020T8", 00:24:34.114 "serial_number": "BTLJ125505KA2P0BGN", 00:24:34.114 "firmware_revision": "VDV10170", 00:24:34.114 "oacs": { 00:24:34.114 "security": 0, 00:24:34.114 "format": 1, 00:24:34.114 "firmware": 1, 00:24:34.114 "ns_manage": 1 00:24:34.114 }, 00:24:34.114 "multi_ctrlr": false, 00:24:34.114 "ana_reporting": false 00:24:34.114 }, 00:24:34.114 "vs": { 00:24:34.114 "nvme_version": "1.2" 00:24:34.114 }, 00:24:34.114 "ns_data": { 00:24:34.114 "id": 1, 00:24:34.114 "can_share": false 00:24:34.114 } 00:24:34.114 } 00:24:34.114 ], 00:24:34.114 "mp_policy": "active_passive" 00:24:34.114 } 00:24:34.114 } 00:24:34.114 ] 00:24:34.114 10:31:58 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:34.114 10:31:58 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:34.114 [2024-07-15 10:31:58.885267] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x225ea10 PMD being used: compress_qat 00:24:35.488 4244f049-9c24-4820-baca-96aeb9d52890 00:24:35.488 10:31:59 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:35.488 c9982c26-33fb-48e9-ae44-410115ed7497 00:24:35.488 10:32:00 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:35.488 10:32:00 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:35.488 10:32:00 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:35.488 10:32:00 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:35.488 10:32:00 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:35.488 10:32:00 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:35.488 10:32:00 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:35.488 10:32:00 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:35.746 [ 00:24:35.746 { 00:24:35.746 "name": "c9982c26-33fb-48e9-ae44-410115ed7497", 00:24:35.746 "aliases": [ 00:24:35.746 "lvs0/lv0" 00:24:35.746 ], 00:24:35.746 "product_name": "Logical Volume", 00:24:35.746 "block_size": 512, 00:24:35.746 "num_blocks": 204800, 00:24:35.746 "uuid": "c9982c26-33fb-48e9-ae44-410115ed7497", 00:24:35.746 "assigned_rate_limits": { 00:24:35.746 "rw_ios_per_sec": 0, 00:24:35.746 "rw_mbytes_per_sec": 0, 00:24:35.746 "r_mbytes_per_sec": 0, 00:24:35.746 "w_mbytes_per_sec": 0 00:24:35.746 }, 00:24:35.746 "claimed": false, 00:24:35.746 "zoned": false, 00:24:35.746 "supported_io_types": { 00:24:35.746 "read": true, 00:24:35.746 "write": true, 00:24:35.746 "unmap": true, 00:24:35.746 "flush": false, 00:24:35.746 "reset": true, 00:24:35.746 "nvme_admin": false, 00:24:35.746 "nvme_io": false, 00:24:35.746 "nvme_io_md": false, 00:24:35.746 "write_zeroes": true, 00:24:35.746 "zcopy": false, 00:24:35.746 "get_zone_info": false, 00:24:35.746 "zone_management": false, 00:24:35.746 "zone_append": false, 00:24:35.746 "compare": false, 00:24:35.746 "compare_and_write": false, 00:24:35.746 "abort": false, 00:24:35.746 "seek_hole": true, 00:24:35.746 "seek_data": true, 00:24:35.746 "copy": false, 00:24:35.746 "nvme_iov_md": false 00:24:35.746 }, 00:24:35.746 "driver_specific": { 00:24:35.746 "lvol": { 00:24:35.746 "lvol_store_uuid": "4244f049-9c24-4820-baca-96aeb9d52890", 00:24:35.746 "base_bdev": "Nvme0n1", 00:24:35.746 "thin_provision": true, 00:24:35.746 "num_allocated_clusters": 0, 00:24:35.746 "snapshot": false, 00:24:35.746 "clone": false, 00:24:35.746 "esnap_clone": false 00:24:35.746 } 00:24:35.746 } 00:24:35.746 } 00:24:35.746 ] 00:24:35.746 10:32:00 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:35.746 10:32:00 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:24:35.746 10:32:00 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:24:36.004 [2024-07-15 10:32:00.583184] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:36.004 COMP_lvs0/lv0 00:24:36.005 10:32:00 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:36.005 10:32:00 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:36.005 10:32:00 compress_compdev -- common/autotest_common.sh@898 -- # 
local bdev_timeout= 00:24:36.005 10:32:00 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:36.005 10:32:00 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:36.005 10:32:00 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:36.005 10:32:00 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:36.005 10:32:00 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:36.263 [ 00:24:36.263 { 00:24:36.263 "name": "COMP_lvs0/lv0", 00:24:36.263 "aliases": [ 00:24:36.263 "c531a5e0-8c42-54fd-a95d-eb68c26a1606" 00:24:36.263 ], 00:24:36.263 "product_name": "compress", 00:24:36.263 "block_size": 4096, 00:24:36.263 "num_blocks": 25088, 00:24:36.263 "uuid": "c531a5e0-8c42-54fd-a95d-eb68c26a1606", 00:24:36.263 "assigned_rate_limits": { 00:24:36.263 "rw_ios_per_sec": 0, 00:24:36.263 "rw_mbytes_per_sec": 0, 00:24:36.263 "r_mbytes_per_sec": 0, 00:24:36.263 "w_mbytes_per_sec": 0 00:24:36.263 }, 00:24:36.263 "claimed": false, 00:24:36.263 "zoned": false, 00:24:36.263 "supported_io_types": { 00:24:36.263 "read": true, 00:24:36.263 "write": true, 00:24:36.263 "unmap": false, 00:24:36.263 "flush": false, 00:24:36.263 "reset": false, 00:24:36.263 "nvme_admin": false, 00:24:36.263 "nvme_io": false, 00:24:36.263 "nvme_io_md": false, 00:24:36.263 "write_zeroes": true, 00:24:36.263 "zcopy": false, 00:24:36.263 "get_zone_info": false, 00:24:36.263 "zone_management": false, 00:24:36.263 "zone_append": false, 00:24:36.263 "compare": false, 00:24:36.263 "compare_and_write": false, 00:24:36.263 "abort": false, 00:24:36.263 "seek_hole": false, 00:24:36.263 "seek_data": false, 00:24:36.263 "copy": false, 00:24:36.263 "nvme_iov_md": false 00:24:36.263 }, 00:24:36.263 "driver_specific": { 00:24:36.263 "compress": { 00:24:36.263 "name": "COMP_lvs0/lv0", 00:24:36.263 "base_bdev_name": "c9982c26-33fb-48e9-ae44-410115ed7497" 00:24:36.263 } 00:24:36.263 } 00:24:36.263 } 00:24:36.263 ] 00:24:36.263 10:32:00 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:36.263 10:32:00 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:36.263 [2024-07-15 10:32:01.033192] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f36041b15c0 PMD being used: compress_qat 00:24:36.263 [2024-07-15 10:32:01.034871] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x240c3d0 PMD being used: compress_qat 00:24:36.263 Running I/O for 3 seconds... 
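With -l 4096 the compress bdev above reports block_size 4096 and num_blocks 25088 — the same 98 MiB of capacity the 512-byte-chunk case exposed as 200704 blocks of 512 bytes. Likewise, since these verify jobs use 4096-byte I/O, the MiB/s column in the result tables is simply IOPS/256. A quick arithmetic check against the numbers reported above:

    awk 'BEGIN {
        printf "capacity:   %d MiB == %d MiB\n", 200704*512/2^20, 25088*4096/2^20  # both 98 MiB
        printf "throughput: %.2f MiB/s\n", 8500.40/256                             # 33.20, the 512 B run total
    }'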
00:24:39.547 00:24:39.547 Latency(us) 00:24:39.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.547 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:24:39.547 Verification LBA range: start 0x0 length 0x3100 00:24:39.547 COMP_lvs0/lv0 : 3.01 3980.44 15.55 0.00 0.00 7998.93 175.31 15518.92 00:24:39.547 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:24:39.547 Verification LBA range: start 0x3100 length 0x3100 00:24:39.547 COMP_lvs0/lv0 : 3.01 4076.57 15.92 0.00 0.00 7813.27 167.94 15309.21 00:24:39.547 =================================================================================================================== 00:24:39.547 Total : 8057.01 31.47 0.00 0.00 7905.02 167.94 15518.92 00:24:39.547 0 00:24:39.547 10:32:04 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:24:39.547 10:32:04 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:39.547 10:32:04 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:39.805 10:32:04 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:39.805 10:32:04 compress_compdev -- compress/compress.sh@78 -- # killprocess 1916870 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1916870 ']' 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1916870 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1916870 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1916870' 00:24:39.805 killing process with pid 1916870 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@967 -- # kill 1916870 00:24:39.805 Received shutdown signal, test time was about 3.000000 seconds 00:24:39.805 00:24:39.805 Latency(us) 00:24:39.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.805 =================================================================================================================== 00:24:39.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.805 10:32:04 compress_compdev -- common/autotest_common.sh@972 -- # wait 1916870 00:24:42.333 10:32:06 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:24:42.333 10:32:06 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:24:42.333 10:32:06 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=1918833 00:24:42.333 10:32:06 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:42.333 10:32:06 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:24:42.333 10:32:06 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 
1918833 00:24:42.333 10:32:06 compress_compdev -- common/autotest_common.sh@829 -- # '[' -z 1918833 ']' 00:24:42.333 10:32:06 compress_compdev -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.333 10:32:06 compress_compdev -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.333 10:32:06 compress_compdev -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.333 10:32:06 compress_compdev -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.333 10:32:06 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:42.333 [2024-07-15 10:32:06.960693] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:42.333 [2024-07-15 10:32:06.960743] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918833 ] 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 
0000:3f:01.0 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:42.333 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:42.333 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:42.334 [2024-07-15 10:32:07.054129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:42.592 [2024-07-15 10:32:07.125174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.592 [2024-07-15 10:32:07.125248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.592 [2024-07-15 10:32:07.125250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.849 [2024-07-15 10:32:07.631070] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:24:43.108 10:32:07 compress_compdev -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.108 10:32:07 compress_compdev -- common/autotest_common.sh@862 -- # return 0 00:24:43.108 10:32:07 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:24:43.108 10:32:07 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:43.108 10:32:07 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:46.390 [2024-07-15 10:32:10.773203] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1bffaa0 PMD being used: compress_qat 00:24:46.390 10:32:10 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:46.390 10:32:10 compress_compdev -- common/autotest_common.sh@897 -- 
# local bdev_name=Nvme0n1 00:24:46.390 10:32:10 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:46.390 10:32:10 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:46.390 10:32:10 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:46.390 10:32:10 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:46.390 10:32:10 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:46.390 10:32:10 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:46.390 [ 00:24:46.390 { 00:24:46.390 "name": "Nvme0n1", 00:24:46.390 "aliases": [ 00:24:46.390 "4afc7ed1-37e9-449b-ae96-6817f5b77700" 00:24:46.390 ], 00:24:46.390 "product_name": "NVMe disk", 00:24:46.390 "block_size": 512, 00:24:46.390 "num_blocks": 3907029168, 00:24:46.390 "uuid": "4afc7ed1-37e9-449b-ae96-6817f5b77700", 00:24:46.390 "assigned_rate_limits": { 00:24:46.390 "rw_ios_per_sec": 0, 00:24:46.390 "rw_mbytes_per_sec": 0, 00:24:46.390 "r_mbytes_per_sec": 0, 00:24:46.390 "w_mbytes_per_sec": 0 00:24:46.390 }, 00:24:46.390 "claimed": false, 00:24:46.390 "zoned": false, 00:24:46.390 "supported_io_types": { 00:24:46.390 "read": true, 00:24:46.390 "write": true, 00:24:46.390 "unmap": true, 00:24:46.390 "flush": true, 00:24:46.390 "reset": true, 00:24:46.390 "nvme_admin": true, 00:24:46.390 "nvme_io": true, 00:24:46.390 "nvme_io_md": false, 00:24:46.390 "write_zeroes": true, 00:24:46.390 "zcopy": false, 00:24:46.390 "get_zone_info": false, 00:24:46.390 "zone_management": false, 00:24:46.390 "zone_append": false, 00:24:46.390 "compare": false, 00:24:46.390 "compare_and_write": false, 00:24:46.390 "abort": true, 00:24:46.390 "seek_hole": false, 00:24:46.390 "seek_data": false, 00:24:46.390 "copy": false, 00:24:46.390 "nvme_iov_md": false 00:24:46.390 }, 00:24:46.390 "driver_specific": { 00:24:46.390 "nvme": [ 00:24:46.390 { 00:24:46.390 "pci_address": "0000:d8:00.0", 00:24:46.390 "trid": { 00:24:46.390 "trtype": "PCIe", 00:24:46.390 "traddr": "0000:d8:00.0" 00:24:46.390 }, 00:24:46.390 "ctrlr_data": { 00:24:46.390 "cntlid": 0, 00:24:46.390 "vendor_id": "0x8086", 00:24:46.390 "model_number": "INTEL SSDPE2KX020T8", 00:24:46.390 "serial_number": "BTLJ125505KA2P0BGN", 00:24:46.390 "firmware_revision": "VDV10170", 00:24:46.390 "oacs": { 00:24:46.390 "security": 0, 00:24:46.390 "format": 1, 00:24:46.390 "firmware": 1, 00:24:46.390 "ns_manage": 1 00:24:46.390 }, 00:24:46.390 "multi_ctrlr": false, 00:24:46.390 "ana_reporting": false 00:24:46.390 }, 00:24:46.390 "vs": { 00:24:46.390 "nvme_version": "1.2" 00:24:46.390 }, 00:24:46.390 "ns_data": { 00:24:46.390 "id": 1, 00:24:46.390 "can_share": false 00:24:46.390 } 00:24:46.390 } 00:24:46.390 ], 00:24:46.390 "mp_policy": "active_passive" 00:24:46.390 } 00:24:46.390 } 00:24:46.390 ] 00:24:46.390 10:32:11 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:46.390 10:32:11 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:46.649 [2024-07-15 10:32:11.288812] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a4e0b0 PMD being used: compress_qat 00:24:47.582 33c829e5-63e1-4551-af55-14b0957a698b 00:24:47.582 10:32:12 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:47.839 a0965aec-99bf-4d98-8326-ba825ffc32cc 00:24:47.840 10:32:12 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:47.840 10:32:12 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:47.840 10:32:12 compress_compdev -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:47.840 10:32:12 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:47.840 10:32:12 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:47.840 10:32:12 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:47.840 10:32:12 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:47.840 10:32:12 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:48.097 [ 00:24:48.097 { 00:24:48.097 "name": "a0965aec-99bf-4d98-8326-ba825ffc32cc", 00:24:48.097 "aliases": [ 00:24:48.097 "lvs0/lv0" 00:24:48.097 ], 00:24:48.097 "product_name": "Logical Volume", 00:24:48.097 "block_size": 512, 00:24:48.097 "num_blocks": 204800, 00:24:48.097 "uuid": "a0965aec-99bf-4d98-8326-ba825ffc32cc", 00:24:48.097 "assigned_rate_limits": { 00:24:48.097 "rw_ios_per_sec": 0, 00:24:48.097 "rw_mbytes_per_sec": 0, 00:24:48.097 "r_mbytes_per_sec": 0, 00:24:48.097 "w_mbytes_per_sec": 0 00:24:48.097 }, 00:24:48.097 "claimed": false, 00:24:48.097 "zoned": false, 00:24:48.097 "supported_io_types": { 00:24:48.097 "read": true, 00:24:48.097 "write": true, 00:24:48.097 "unmap": true, 00:24:48.097 "flush": false, 00:24:48.097 "reset": true, 00:24:48.098 "nvme_admin": false, 00:24:48.098 "nvme_io": false, 00:24:48.098 "nvme_io_md": false, 00:24:48.098 "write_zeroes": true, 00:24:48.098 "zcopy": false, 00:24:48.098 "get_zone_info": false, 00:24:48.098 "zone_management": false, 00:24:48.098 "zone_append": false, 00:24:48.098 "compare": false, 00:24:48.098 "compare_and_write": false, 00:24:48.098 "abort": false, 00:24:48.098 "seek_hole": true, 00:24:48.098 "seek_data": true, 00:24:48.098 "copy": false, 00:24:48.098 "nvme_iov_md": false 00:24:48.098 }, 00:24:48.098 "driver_specific": { 00:24:48.098 "lvol": { 00:24:48.098 "lvol_store_uuid": "33c829e5-63e1-4551-af55-14b0957a698b", 00:24:48.098 "base_bdev": "Nvme0n1", 00:24:48.098 "thin_provision": true, 00:24:48.098 "num_allocated_clusters": 0, 00:24:48.098 "snapshot": false, 00:24:48.098 "clone": false, 00:24:48.098 "esnap_clone": false 00:24:48.098 } 00:24:48.098 } 00:24:48.098 } 00:24:48.098 ] 00:24:48.098 10:32:12 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:48.098 10:32:12 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:24:48.098 10:32:12 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:24:48.356 [2024-07-15 10:32:12.910344] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:48.356 COMP_lvs0/lv0 00:24:48.356 10:32:12 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:48.356 10:32:12 compress_compdev -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:48.356 10:32:12 compress_compdev -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:24:48.356 10:32:12 compress_compdev -- common/autotest_common.sh@899 -- # local i 00:24:48.356 10:32:12 compress_compdev -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:48.356 10:32:12 compress_compdev -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:48.356 10:32:12 compress_compdev -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:48.356 10:32:13 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:48.613 [ 00:24:48.614 { 00:24:48.614 "name": "COMP_lvs0/lv0", 00:24:48.614 "aliases": [ 00:24:48.614 "2e1bdab8-7a3a-5e7d-ae6e-4c3ed8279f06" 00:24:48.614 ], 00:24:48.614 "product_name": "compress", 00:24:48.614 "block_size": 512, 00:24:48.614 "num_blocks": 200704, 00:24:48.614 "uuid": "2e1bdab8-7a3a-5e7d-ae6e-4c3ed8279f06", 00:24:48.614 "assigned_rate_limits": { 00:24:48.614 "rw_ios_per_sec": 0, 00:24:48.614 "rw_mbytes_per_sec": 0, 00:24:48.614 "r_mbytes_per_sec": 0, 00:24:48.614 "w_mbytes_per_sec": 0 00:24:48.614 }, 00:24:48.614 "claimed": false, 00:24:48.614 "zoned": false, 00:24:48.614 "supported_io_types": { 00:24:48.614 "read": true, 00:24:48.614 "write": true, 00:24:48.614 "unmap": false, 00:24:48.614 "flush": false, 00:24:48.614 "reset": false, 00:24:48.614 "nvme_admin": false, 00:24:48.614 "nvme_io": false, 00:24:48.614 "nvme_io_md": false, 00:24:48.614 "write_zeroes": true, 00:24:48.614 "zcopy": false, 00:24:48.614 "get_zone_info": false, 00:24:48.614 "zone_management": false, 00:24:48.614 "zone_append": false, 00:24:48.614 "compare": false, 00:24:48.614 "compare_and_write": false, 00:24:48.614 "abort": false, 00:24:48.614 "seek_hole": false, 00:24:48.614 "seek_data": false, 00:24:48.614 "copy": false, 00:24:48.614 "nvme_iov_md": false 00:24:48.614 }, 00:24:48.614 "driver_specific": { 00:24:48.614 "compress": { 00:24:48.614 "name": "COMP_lvs0/lv0", 00:24:48.614 "base_bdev_name": "a0965aec-99bf-4d98-8326-ba825ffc32cc" 00:24:48.614 } 00:24:48.614 } 00:24:48.614 } 00:24:48.614 ] 00:24:48.614 10:32:13 compress_compdev -- common/autotest_common.sh@905 -- # return 0 00:24:48.614 10:32:13 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:48.614 [2024-07-15 10:32:13.359374] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f92b81b1350 PMD being used: compress_qat 00:24:48.614 I/O targets: 00:24:48.614 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:24:48.614 00:24:48.614 00:24:48.614 CUnit - A unit testing framework for C - Version 2.1-3 00:24:48.614 http://cunit.sourceforge.net/ 00:24:48.614 00:24:48.614 00:24:48.614 Suite: bdevio tests on: COMP_lvs0/lv0 00:24:48.614 Test: blockdev write read block ...passed 00:24:48.614 Test: blockdev write zeroes read block ...passed 00:24:48.614 Test: blockdev write zeroes read no split ...passed 00:24:48.614 Test: blockdev write zeroes read split ...passed 00:24:48.872 Test: blockdev write zeroes read split partial ...passed 00:24:48.872 Test: blockdev reset ...[2024-07-15 10:32:13.419680] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:24:48.872 passed 00:24:48.872 Test: blockdev write read 8 blocks ...passed 00:24:48.872 Test: blockdev write read size > 128k ...passed 00:24:48.872 Test: blockdev write read invalid size ...passed 00:24:48.872 Test: blockdev write read offset + nbytes == 
size of blockdev ...passed 00:24:48.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:48.872 Test: blockdev write read max offset ...passed 00:24:48.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:48.872 Test: blockdev writev readv 8 blocks ...passed 00:24:48.872 Test: blockdev writev readv 30 x 1block ...passed 00:24:48.872 Test: blockdev writev readv block ...passed 00:24:48.872 Test: blockdev writev readv size > 128k ...passed 00:24:48.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:48.872 Test: blockdev comparev and writev ...passed 00:24:48.872 Test: blockdev nvme passthru rw ...passed 00:24:48.872 Test: blockdev nvme passthru vendor specific ...passed 00:24:48.872 Test: blockdev nvme admin passthru ...passed 00:24:48.872 Test: blockdev copy ...passed 00:24:48.872 00:24:48.872 Run Summary: Type Total Ran Passed Failed Inactive 00:24:48.872 suites 1 1 n/a 0 0 00:24:48.872 tests 23 23 23 0 0 00:24:48.872 asserts 130 130 130 0 n/a 00:24:48.872 00:24:48.872 Elapsed time = 0.203 seconds 00:24:48.872 0 00:24:48.872 10:32:13 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:24:48.872 10:32:13 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:24:48.872 10:32:13 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:24:49.130 10:32:13 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:24:49.130 10:32:13 compress_compdev -- compress/compress.sh@62 -- # killprocess 1918833 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@948 -- # '[' -z 1918833 ']' 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@952 -- # kill -0 1918833 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@953 -- # uname 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1918833 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1918833' 00:24:49.130 killing process with pid 1918833 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@967 -- # kill 1918833 00:24:49.130 10:32:13 compress_compdev -- common/autotest_common.sh@972 -- # wait 1918833 00:24:51.661 10:32:16 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:24:51.661 10:32:16 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:24:51.661 00:24:51.661 real 0m46.583s 00:24:51.661 user 1m43.309s 00:24:51.661 sys 0m4.434s 00:24:51.661 10:32:16 compress_compdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:51.661 10:32:16 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:24:51.661 ************************************ 00:24:51.661 END TEST compress_compdev 00:24:51.661 ************************************ 00:24:51.661 10:32:16 -- common/autotest_common.sh@1142 -- # return 0 00:24:51.661 10:32:16 -- spdk/autotest.sh@349 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 
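Each run above tears down the same way: destroy_vols deletes the compress vbdev and then its lvstore, the error-cleanup trap is cleared, and killprocess stops the bdevperf/bdevio process and waits for it. A condensed sketch of that cleanup, assuming $rpc_py and $pid are set the way compress.sh sets them:

    $rpc_py bdev_compress_delete COMP_lvs0/lv0   # drop the compress vbdev first
    $rpc_py bdev_lvol_delete_lvstore -l lvs0     # then the backing lvstore
    trap - SIGINT SIGTERM EXIT                   # clear the error_cleanup trap
    kill "$pid" && wait "$pid"                   # stop the app launched with -z and reap it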
00:24:51.661 10:32:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:51.661 10:32:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.661 10:32:16 -- common/autotest_common.sh@10 -- # set +x 00:24:51.661 ************************************ 00:24:51.661 START TEST compress_isal 00:24:51.661 ************************************ 00:24:51.661 10:32:16 compress_isal -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:24:51.661 * Looking for test storage... 00:24:51.661 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:24:51.661 10:32:16 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:24:51.661 10:32:16 compress_isal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.661 10:32:16 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.661 10:32:16 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.661 10:32:16 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.661 10:32:16 compress_isal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.661 10:32:16 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.661 10:32:16 compress_isal -- paths/export.sh@5 -- # export PATH 00:24:51.661 10:32:16 compress_isal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@47 -- # : 0 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.661 10:32:16 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.662 10:32:16 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.662 10:32:16 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1920466 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1920466 00:24:51.662 10:32:16 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1920466 ']' 00:24:51.662 10:32:16 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:24:51.662 10:32:16 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:24:51.662 10:32:16 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.662 10:32:16 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.662 10:32:16 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.662 10:32:16 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:24:51.920 [2024-07-15 10:32:16.496432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:51.920 [2024-07-15 10:32:16.496484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920466 ] 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:24:51.920 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:51.920 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.920 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:51.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.921 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:51.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.921 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:51.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.921 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:51.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.921 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:51.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.921 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:51.921 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:51.921 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:51.921 [2024-07-15 10:32:16.587873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:51.921 [2024-07-15 10:32:16.662147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.921 [2024-07-15 10:32:16.662151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.886 10:32:17 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.886 10:32:17 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:24:52.886 10:32:17 compress_isal -- compress/compress.sh@74 -- # create_vols 00:24:52.886 10:32:17 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:52.886 10:32:17 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:56.166 10:32:20 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:24:56.166 10:32:20 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:24:56.166 10:32:20 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:56.166 10:32:20 compress_isal -- common/autotest_common.sh@899 -- # local i 00:24:56.166 10:32:20 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:56.166 10:32:20 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:56.166 10:32:20 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:56.166 10:32:20 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:24:56.166 [ 00:24:56.166 { 00:24:56.166 "name": "Nvme0n1", 00:24:56.166 "aliases": [ 00:24:56.166 "f15a4766-3a49-4476-8237-fa87fa0ff5c1" 00:24:56.166 ], 00:24:56.166 "product_name": "NVMe disk", 00:24:56.166 "block_size": 512, 00:24:56.166 "num_blocks": 3907029168, 00:24:56.166 "uuid": "f15a4766-3a49-4476-8237-fa87fa0ff5c1", 00:24:56.166 "assigned_rate_limits": { 00:24:56.166 "rw_ios_per_sec": 0, 00:24:56.166 "rw_mbytes_per_sec": 0, 00:24:56.166 "r_mbytes_per_sec": 0, 00:24:56.166 "w_mbytes_per_sec": 0 00:24:56.166 }, 00:24:56.166 "claimed": false, 00:24:56.166 "zoned": false, 00:24:56.166 "supported_io_types": { 00:24:56.166 "read": true, 00:24:56.166 "write": true, 00:24:56.166 "unmap": true, 00:24:56.166 "flush": true, 00:24:56.166 "reset": true, 00:24:56.166 "nvme_admin": true, 00:24:56.166 "nvme_io": true, 00:24:56.166 "nvme_io_md": false, 00:24:56.166 "write_zeroes": true, 00:24:56.166 "zcopy": false, 00:24:56.166 "get_zone_info": false, 00:24:56.166 "zone_management": false, 00:24:56.166 "zone_append": false, 00:24:56.166 "compare": false, 00:24:56.166 "compare_and_write": false, 00:24:56.166 "abort": true, 00:24:56.166 "seek_hole": false, 00:24:56.166 "seek_data": false, 00:24:56.166 "copy": false, 00:24:56.166 "nvme_iov_md": false 00:24:56.166 }, 00:24:56.166 "driver_specific": { 00:24:56.166 "nvme": [ 00:24:56.166 { 00:24:56.166 "pci_address": "0000:d8:00.0", 00:24:56.166 "trid": { 00:24:56.166 "trtype": "PCIe", 00:24:56.166 "traddr": "0000:d8:00.0" 00:24:56.166 }, 00:24:56.166 "ctrlr_data": { 00:24:56.166 "cntlid": 0, 00:24:56.166 "vendor_id": "0x8086", 00:24:56.166 "model_number": "INTEL SSDPE2KX020T8", 00:24:56.166 "serial_number": "BTLJ125505KA2P0BGN", 00:24:56.166 "firmware_revision": "VDV10170", 00:24:56.166 "oacs": { 00:24:56.166 "security": 0, 00:24:56.166 "format": 1, 00:24:56.166 "firmware": 1, 00:24:56.166 "ns_manage": 1 00:24:56.166 }, 00:24:56.166 "multi_ctrlr": false, 00:24:56.166 "ana_reporting": false 00:24:56.166 }, 00:24:56.166 "vs": { 00:24:56.166 "nvme_version": "1.2" 00:24:56.166 }, 00:24:56.166 "ns_data": { 00:24:56.166 "id": 1, 00:24:56.166 "can_share": false 00:24:56.166 } 00:24:56.166 } 00:24:56.166 ], 00:24:56.166 "mp_policy": "active_passive" 00:24:56.166 } 00:24:56.166 } 00:24:56.166 ] 00:24:56.166 10:32:20 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:24:56.166 10:32:20 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:24:57.100 40993c67-4d29-46fd-ba99-b02070da32f1 00:24:57.100 10:32:21 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:24:57.358 b26c78e8-e7b8-4855-b53c-a9e27c63de5c 00:24:57.358 10:32:22 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:24:57.358 10:32:22 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:24:57.358 10:32:22 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:57.358 10:32:22 compress_isal -- common/autotest_common.sh@899 -- # local i 00:24:57.358 10:32:22 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:57.358 10:32:22 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:57.358 10:32:22 compress_isal -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:57.616 10:32:22 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:24:57.616 [ 00:24:57.616 { 00:24:57.616 "name": "b26c78e8-e7b8-4855-b53c-a9e27c63de5c", 00:24:57.616 "aliases": [ 00:24:57.616 "lvs0/lv0" 00:24:57.616 ], 00:24:57.616 "product_name": "Logical Volume", 00:24:57.616 "block_size": 512, 00:24:57.616 "num_blocks": 204800, 00:24:57.616 "uuid": "b26c78e8-e7b8-4855-b53c-a9e27c63de5c", 00:24:57.616 "assigned_rate_limits": { 00:24:57.616 "rw_ios_per_sec": 0, 00:24:57.616 "rw_mbytes_per_sec": 0, 00:24:57.616 "r_mbytes_per_sec": 0, 00:24:57.616 "w_mbytes_per_sec": 0 00:24:57.616 }, 00:24:57.616 "claimed": false, 00:24:57.616 "zoned": false, 00:24:57.616 "supported_io_types": { 00:24:57.616 "read": true, 00:24:57.616 "write": true, 00:24:57.616 "unmap": true, 00:24:57.616 "flush": false, 00:24:57.616 "reset": true, 00:24:57.616 "nvme_admin": false, 00:24:57.616 "nvme_io": false, 00:24:57.616 "nvme_io_md": false, 00:24:57.616 "write_zeroes": true, 00:24:57.616 "zcopy": false, 00:24:57.616 "get_zone_info": false, 00:24:57.616 "zone_management": false, 00:24:57.616 "zone_append": false, 00:24:57.616 "compare": false, 00:24:57.616 "compare_and_write": false, 00:24:57.616 "abort": false, 00:24:57.616 "seek_hole": true, 00:24:57.616 "seek_data": true, 00:24:57.616 "copy": false, 00:24:57.616 "nvme_iov_md": false 00:24:57.616 }, 00:24:57.616 "driver_specific": { 00:24:57.616 "lvol": { 00:24:57.616 "lvol_store_uuid": "40993c67-4d29-46fd-ba99-b02070da32f1", 00:24:57.616 "base_bdev": "Nvme0n1", 00:24:57.616 "thin_provision": true, 00:24:57.616 "num_allocated_clusters": 0, 00:24:57.616 "snapshot": false, 00:24:57.616 "clone": false, 00:24:57.616 "esnap_clone": false 00:24:57.616 } 00:24:57.616 } 00:24:57.616 } 00:24:57.616 ] 00:24:57.616 10:32:22 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:24:57.616 10:32:22 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:24:57.616 10:32:22 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:24:57.875 [2024-07-15 10:32:22.526725] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:24:57.875 COMP_lvs0/lv0 00:24:57.875 10:32:22 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:24:57.875 10:32:22 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:24:57.875 10:32:22 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:57.875 10:32:22 compress_isal -- common/autotest_common.sh@899 -- # local i 00:24:57.875 10:32:22 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:57.875 10:32:22 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:57.875 10:32:22 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:58.133 10:32:22 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:24:58.133 [ 00:24:58.133 { 00:24:58.133 "name": "COMP_lvs0/lv0", 00:24:58.133 "aliases": [ 00:24:58.133 "4b14d3d8-149f-5835-b601-188560a86289" 00:24:58.133 ], 00:24:58.133 "product_name": "compress", 
00:24:58.133 "block_size": 512, 00:24:58.133 "num_blocks": 200704, 00:24:58.133 "uuid": "4b14d3d8-149f-5835-b601-188560a86289", 00:24:58.133 "assigned_rate_limits": { 00:24:58.133 "rw_ios_per_sec": 0, 00:24:58.133 "rw_mbytes_per_sec": 0, 00:24:58.133 "r_mbytes_per_sec": 0, 00:24:58.133 "w_mbytes_per_sec": 0 00:24:58.133 }, 00:24:58.133 "claimed": false, 00:24:58.133 "zoned": false, 00:24:58.133 "supported_io_types": { 00:24:58.133 "read": true, 00:24:58.133 "write": true, 00:24:58.133 "unmap": false, 00:24:58.133 "flush": false, 00:24:58.133 "reset": false, 00:24:58.133 "nvme_admin": false, 00:24:58.133 "nvme_io": false, 00:24:58.133 "nvme_io_md": false, 00:24:58.133 "write_zeroes": true, 00:24:58.133 "zcopy": false, 00:24:58.133 "get_zone_info": false, 00:24:58.133 "zone_management": false, 00:24:58.133 "zone_append": false, 00:24:58.133 "compare": false, 00:24:58.133 "compare_and_write": false, 00:24:58.133 "abort": false, 00:24:58.133 "seek_hole": false, 00:24:58.133 "seek_data": false, 00:24:58.133 "copy": false, 00:24:58.133 "nvme_iov_md": false 00:24:58.133 }, 00:24:58.133 "driver_specific": { 00:24:58.133 "compress": { 00:24:58.133 "name": "COMP_lvs0/lv0", 00:24:58.133 "base_bdev_name": "b26c78e8-e7b8-4855-b53c-a9e27c63de5c" 00:24:58.133 } 00:24:58.133 } 00:24:58.133 } 00:24:58.133 ] 00:24:58.133 10:32:22 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:24:58.133 10:32:22 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:58.392 Running I/O for 3 seconds... 00:25:01.687 00:25:01.687 Latency(us) 00:25:01.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.687 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:01.687 Verification LBA range: start 0x0 length 0x3100 00:25:01.687 COMP_lvs0/lv0 : 3.01 3502.25 13.68 0.00 0.00 9098.59 55.71 16986.93 00:25:01.687 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:01.687 Verification LBA range: start 0x3100 length 0x3100 00:25:01.687 COMP_lvs0/lv0 : 3.01 3504.84 13.69 0.00 0.00 9088.12 54.07 17091.79 00:25:01.687 =================================================================================================================== 00:25:01.687 Total : 7007.09 27.37 0.00 0.00 9093.35 54.07 17091.79 00:25:01.687 0 00:25:01.687 10:32:25 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:01.687 10:32:25 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:01.687 10:32:26 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:01.687 10:32:26 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:01.687 10:32:26 compress_isal -- compress/compress.sh@78 -- # killprocess 1920466 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1920466 ']' 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1920466 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1920466 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1920466' 00:25:01.687 killing process with pid 1920466 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@967 -- # kill 1920466 00:25:01.687 Received shutdown signal, test time was about 3.000000 seconds 00:25:01.687 00:25:01.687 Latency(us) 00:25:01.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.687 =================================================================================================================== 00:25:01.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.687 10:32:26 compress_isal -- common/autotest_common.sh@972 -- # wait 1920466 00:25:04.218 10:32:28 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:25:04.218 10:32:28 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:04.218 10:32:28 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1922584 00:25:04.218 10:32:28 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:04.218 10:32:28 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:04.218 10:32:28 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1922584 00:25:04.218 10:32:28 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1922584 ']' 00:25:04.218 10:32:28 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.218 10:32:28 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:04.218 10:32:28 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.218 10:32:28 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:04.218 10:32:28 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:04.218 [2024-07-15 10:32:28.903278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
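The three compress_isal passes in this log differ only in the size argument handed to run_bdevperf (empty for the first pass, 512 here, 4096 in the next), which compress.sh forwards to bdev_compress_create. A minimal sketch of that branch, as it can be read off the compress.sh@41/@42/@44 lines in the trace; lb_size is an assumed variable name, not necessarily the one used in the script:

lb_size="$1"                                                         # '' / 512 / 4096 across the three passes
if [ -z "$lb_size" ]; then
    $rpc bdev_compress_create -b lvs0/lv0 -p /tmp/pmem               # traced at compress.sh@42: default chunk size
else
    $rpc bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l "$lb_size" # traced at compress.sh@44: explicit size
fi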
00:25:04.218 [2024-07-15 10:32:28.903327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922584 ] 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:04.218 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:04.218 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:04.218 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:04.218 [2024-07-15 10:32:28.993883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:04.477 [2024-07-15 10:32:29.069450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.477 [2024-07-15 10:32:29.069453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.044 10:32:29 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:05.044 10:32:29 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:25:05.044 10:32:29 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:25:05.044 10:32:29 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:05.044 10:32:29 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:08.326 10:32:32 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:08.326 10:32:32 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:08.326 10:32:32 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:08.326 10:32:32 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:08.326 10:32:32 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:08.326 10:32:32 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:08.326 10:32:32 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:08.326 10:32:32 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:08.326 [ 00:25:08.326 { 00:25:08.326 "name": "Nvme0n1", 00:25:08.326 "aliases": [ 00:25:08.326 "010bbbf0-8eac-4c6f-9610-c71312f85591" 00:25:08.326 ], 00:25:08.326 "product_name": "NVMe disk", 00:25:08.326 "block_size": 512, 00:25:08.326 "num_blocks": 3907029168, 00:25:08.326 "uuid": "010bbbf0-8eac-4c6f-9610-c71312f85591", 00:25:08.326 "assigned_rate_limits": { 00:25:08.326 "rw_ios_per_sec": 0, 00:25:08.326 "rw_mbytes_per_sec": 0, 00:25:08.326 "r_mbytes_per_sec": 0, 00:25:08.326 "w_mbytes_per_sec": 0 00:25:08.326 }, 00:25:08.326 "claimed": false, 
00:25:08.326 "zoned": false, 00:25:08.326 "supported_io_types": { 00:25:08.326 "read": true, 00:25:08.326 "write": true, 00:25:08.326 "unmap": true, 00:25:08.326 "flush": true, 00:25:08.326 "reset": true, 00:25:08.326 "nvme_admin": true, 00:25:08.326 "nvme_io": true, 00:25:08.326 "nvme_io_md": false, 00:25:08.326 "write_zeroes": true, 00:25:08.326 "zcopy": false, 00:25:08.326 "get_zone_info": false, 00:25:08.326 "zone_management": false, 00:25:08.326 "zone_append": false, 00:25:08.326 "compare": false, 00:25:08.326 "compare_and_write": false, 00:25:08.326 "abort": true, 00:25:08.326 "seek_hole": false, 00:25:08.326 "seek_data": false, 00:25:08.326 "copy": false, 00:25:08.326 "nvme_iov_md": false 00:25:08.326 }, 00:25:08.326 "driver_specific": { 00:25:08.326 "nvme": [ 00:25:08.326 { 00:25:08.326 "pci_address": "0000:d8:00.0", 00:25:08.326 "trid": { 00:25:08.326 "trtype": "PCIe", 00:25:08.326 "traddr": "0000:d8:00.0" 00:25:08.326 }, 00:25:08.326 "ctrlr_data": { 00:25:08.326 "cntlid": 0, 00:25:08.326 "vendor_id": "0x8086", 00:25:08.326 "model_number": "INTEL SSDPE2KX020T8", 00:25:08.326 "serial_number": "BTLJ125505KA2P0BGN", 00:25:08.326 "firmware_revision": "VDV10170", 00:25:08.326 "oacs": { 00:25:08.326 "security": 0, 00:25:08.326 "format": 1, 00:25:08.326 "firmware": 1, 00:25:08.326 "ns_manage": 1 00:25:08.326 }, 00:25:08.326 "multi_ctrlr": false, 00:25:08.326 "ana_reporting": false 00:25:08.326 }, 00:25:08.326 "vs": { 00:25:08.326 "nvme_version": "1.2" 00:25:08.326 }, 00:25:08.326 "ns_data": { 00:25:08.326 "id": 1, 00:25:08.326 "can_share": false 00:25:08.326 } 00:25:08.326 } 00:25:08.326 ], 00:25:08.326 "mp_policy": "active_passive" 00:25:08.326 } 00:25:08.326 } 00:25:08.326 ] 00:25:08.326 10:32:33 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:08.326 10:32:33 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:09.699 390ef367-22b6-4db3-814e-8b9e494113c4 00:25:09.699 10:32:34 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:09.699 44cdc14c-a281-462a-ad5f-4e1ccd37b15b 00:25:09.699 10:32:34 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:09.699 10:32:34 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:09.699 10:32:34 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:09.699 10:32:34 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:09.699 10:32:34 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:09.699 10:32:34 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:09.699 10:32:34 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:09.956 10:32:34 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:09.956 [ 00:25:09.956 { 00:25:09.956 "name": "44cdc14c-a281-462a-ad5f-4e1ccd37b15b", 00:25:09.956 "aliases": [ 00:25:09.956 "lvs0/lv0" 00:25:09.956 ], 00:25:09.956 "product_name": "Logical Volume", 00:25:09.956 "block_size": 512, 00:25:09.956 "num_blocks": 204800, 00:25:09.956 "uuid": "44cdc14c-a281-462a-ad5f-4e1ccd37b15b", 00:25:09.956 "assigned_rate_limits": { 00:25:09.956 "rw_ios_per_sec": 0, 00:25:09.956 
"rw_mbytes_per_sec": 0, 00:25:09.956 "r_mbytes_per_sec": 0, 00:25:09.956 "w_mbytes_per_sec": 0 00:25:09.956 }, 00:25:09.956 "claimed": false, 00:25:09.956 "zoned": false, 00:25:09.956 "supported_io_types": { 00:25:09.956 "read": true, 00:25:09.956 "write": true, 00:25:09.956 "unmap": true, 00:25:09.956 "flush": false, 00:25:09.956 "reset": true, 00:25:09.956 "nvme_admin": false, 00:25:09.956 "nvme_io": false, 00:25:09.956 "nvme_io_md": false, 00:25:09.956 "write_zeroes": true, 00:25:09.956 "zcopy": false, 00:25:09.956 "get_zone_info": false, 00:25:09.957 "zone_management": false, 00:25:09.957 "zone_append": false, 00:25:09.957 "compare": false, 00:25:09.957 "compare_and_write": false, 00:25:09.957 "abort": false, 00:25:09.957 "seek_hole": true, 00:25:09.957 "seek_data": true, 00:25:09.957 "copy": false, 00:25:09.957 "nvme_iov_md": false 00:25:09.957 }, 00:25:09.957 "driver_specific": { 00:25:09.957 "lvol": { 00:25:09.957 "lvol_store_uuid": "390ef367-22b6-4db3-814e-8b9e494113c4", 00:25:09.957 "base_bdev": "Nvme0n1", 00:25:09.957 "thin_provision": true, 00:25:09.957 "num_allocated_clusters": 0, 00:25:09.957 "snapshot": false, 00:25:09.957 "clone": false, 00:25:09.957 "esnap_clone": false 00:25:09.957 } 00:25:09.957 } 00:25:09.957 } 00:25:09.957 ] 00:25:09.957 10:32:34 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:09.957 10:32:34 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:25:09.957 10:32:34 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:25:10.215 [2024-07-15 10:32:34.894074] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:10.215 COMP_lvs0/lv0 00:25:10.215 10:32:34 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:10.215 10:32:34 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:10.215 10:32:34 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:10.215 10:32:34 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:10.215 10:32:34 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:10.215 10:32:34 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:10.215 10:32:34 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:10.473 10:32:35 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:10.473 [ 00:25:10.473 { 00:25:10.473 "name": "COMP_lvs0/lv0", 00:25:10.473 "aliases": [ 00:25:10.473 "97ceb3cd-5207-5b69-aff6-4f3f579c7019" 00:25:10.473 ], 00:25:10.473 "product_name": "compress", 00:25:10.473 "block_size": 512, 00:25:10.473 "num_blocks": 200704, 00:25:10.473 "uuid": "97ceb3cd-5207-5b69-aff6-4f3f579c7019", 00:25:10.473 "assigned_rate_limits": { 00:25:10.473 "rw_ios_per_sec": 0, 00:25:10.473 "rw_mbytes_per_sec": 0, 00:25:10.473 "r_mbytes_per_sec": 0, 00:25:10.473 "w_mbytes_per_sec": 0 00:25:10.473 }, 00:25:10.473 "claimed": false, 00:25:10.473 "zoned": false, 00:25:10.473 "supported_io_types": { 00:25:10.473 "read": true, 00:25:10.473 "write": true, 00:25:10.473 "unmap": false, 00:25:10.473 "flush": false, 00:25:10.473 "reset": false, 00:25:10.473 "nvme_admin": false, 00:25:10.473 "nvme_io": false, 00:25:10.473 "nvme_io_md": false, 
00:25:10.473 "write_zeroes": true, 00:25:10.473 "zcopy": false, 00:25:10.473 "get_zone_info": false, 00:25:10.473 "zone_management": false, 00:25:10.473 "zone_append": false, 00:25:10.473 "compare": false, 00:25:10.473 "compare_and_write": false, 00:25:10.473 "abort": false, 00:25:10.473 "seek_hole": false, 00:25:10.473 "seek_data": false, 00:25:10.473 "copy": false, 00:25:10.473 "nvme_iov_md": false 00:25:10.473 }, 00:25:10.473 "driver_specific": { 00:25:10.473 "compress": { 00:25:10.473 "name": "COMP_lvs0/lv0", 00:25:10.473 "base_bdev_name": "44cdc14c-a281-462a-ad5f-4e1ccd37b15b" 00:25:10.473 } 00:25:10.473 } 00:25:10.473 } 00:25:10.473 ] 00:25:10.730 10:32:35 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:10.730 10:32:35 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:10.730 Running I/O for 3 seconds... 00:25:14.011 00:25:14.011 Latency(us) 00:25:14.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.011 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:14.011 Verification LBA range: start 0x0 length 0x3100 00:25:14.011 COMP_lvs0/lv0 : 3.01 3558.15 13.90 0.00 0.00 8957.05 56.93 14680.06 00:25:14.011 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:14.011 Verification LBA range: start 0x3100 length 0x3100 00:25:14.011 COMP_lvs0/lv0 : 3.01 3555.88 13.89 0.00 0.00 8964.08 56.12 14260.63 00:25:14.011 =================================================================================================================== 00:25:14.011 Total : 7114.03 27.79 0.00 0.00 8960.56 56.12 14680.06 00:25:14.011 0 00:25:14.011 10:32:38 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:14.011 10:32:38 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:14.011 10:32:38 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:14.011 10:32:38 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:14.011 10:32:38 compress_isal -- compress/compress.sh@78 -- # killprocess 1922584 00:25:14.011 10:32:38 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1922584 ']' 00:25:14.011 10:32:38 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1922584 00:25:14.011 10:32:38 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:14.011 10:32:38 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.011 10:32:38 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1922584 00:25:14.289 10:32:38 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:14.289 10:32:38 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:14.289 10:32:38 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1922584' 00:25:14.289 killing process with pid 1922584 00:25:14.289 10:32:38 compress_isal -- common/autotest_common.sh@967 -- # kill 1922584 00:25:14.289 Received shutdown signal, test time was about 3.000000 seconds 00:25:14.289 00:25:14.289 Latency(us) 00:25:14.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.289 
=================================================================================================================== 00:25:14.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.289 10:32:38 compress_isal -- common/autotest_common.sh@972 -- # wait 1922584 00:25:16.814 10:32:41 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:25:16.814 10:32:41 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:16.814 10:32:41 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1924714 00:25:16.814 10:32:41 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:25:16.814 10:32:41 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:16.814 10:32:41 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1924714 00:25:16.814 10:32:41 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1924714 ']' 00:25:16.814 10:32:41 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.814 10:32:41 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.814 10:32:41 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.814 10:32:41 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.814 10:32:41 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:16.814 [2024-07-15 10:32:41.265736] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
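Each pass drives the same bdevperf workload; only the compress chunk size changes. A simplified sketch of how one pass is driven, using the binaries and flags invoked above (backgrounding bdevperf with & and stopping it with kill is a simplification of the harness's waitforlisten/killprocess helpers):

bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf
$bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 &     # flags as traced: wait for RPC (-z), 32-deep 4 KiB verify I/O, 3 s, core mask 0x6
bdevperf_pid=$!
# ... create_vols (lvstore, lvol, compress bdev) as sketched earlier ...
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
# ... destroy_vols, then stop the app:
kill $bdevperf_pid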
00:25:16.814 [2024-07-15 10:32:41.265786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924714 ] 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:16.814 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:16.814 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:16.814 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:16.814 [2024-07-15 10:32:41.356886] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:16.814 [2024-07-15 10:32:41.431595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.814 [2024-07-15 10:32:41.431599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.377 10:32:42 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.377 10:32:42 compress_isal -- common/autotest_common.sh@862 -- # return 0 00:25:17.377 10:32:42 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:25:17.377 10:32:42 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:17.377 10:32:42 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:20.657 10:32:45 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:20.657 10:32:45 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:20.657 10:32:45 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:20.657 10:32:45 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:20.657 10:32:45 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:20.657 10:32:45 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:20.657 10:32:45 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:20.657 10:32:45 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:20.657 [ 00:25:20.657 { 00:25:20.657 "name": "Nvme0n1", 00:25:20.657 "aliases": [ 00:25:20.657 "eab986eb-b611-49df-82f8-77a6aeda9a39" 00:25:20.657 ], 00:25:20.657 "product_name": "NVMe disk", 00:25:20.657 "block_size": 512, 00:25:20.657 "num_blocks": 3907029168, 00:25:20.657 "uuid": "eab986eb-b611-49df-82f8-77a6aeda9a39", 00:25:20.657 "assigned_rate_limits": { 00:25:20.657 "rw_ios_per_sec": 0, 00:25:20.657 "rw_mbytes_per_sec": 0, 00:25:20.657 "r_mbytes_per_sec": 0, 00:25:20.657 "w_mbytes_per_sec": 0 00:25:20.657 }, 00:25:20.657 "claimed": false, 
00:25:20.657 "zoned": false, 00:25:20.657 "supported_io_types": { 00:25:20.657 "read": true, 00:25:20.657 "write": true, 00:25:20.657 "unmap": true, 00:25:20.657 "flush": true, 00:25:20.657 "reset": true, 00:25:20.657 "nvme_admin": true, 00:25:20.657 "nvme_io": true, 00:25:20.657 "nvme_io_md": false, 00:25:20.657 "write_zeroes": true, 00:25:20.657 "zcopy": false, 00:25:20.657 "get_zone_info": false, 00:25:20.657 "zone_management": false, 00:25:20.657 "zone_append": false, 00:25:20.657 "compare": false, 00:25:20.657 "compare_and_write": false, 00:25:20.657 "abort": true, 00:25:20.657 "seek_hole": false, 00:25:20.657 "seek_data": false, 00:25:20.657 "copy": false, 00:25:20.657 "nvme_iov_md": false 00:25:20.657 }, 00:25:20.657 "driver_specific": { 00:25:20.657 "nvme": [ 00:25:20.657 { 00:25:20.657 "pci_address": "0000:d8:00.0", 00:25:20.657 "trid": { 00:25:20.657 "trtype": "PCIe", 00:25:20.657 "traddr": "0000:d8:00.0" 00:25:20.657 }, 00:25:20.657 "ctrlr_data": { 00:25:20.657 "cntlid": 0, 00:25:20.657 "vendor_id": "0x8086", 00:25:20.657 "model_number": "INTEL SSDPE2KX020T8", 00:25:20.657 "serial_number": "BTLJ125505KA2P0BGN", 00:25:20.657 "firmware_revision": "VDV10170", 00:25:20.657 "oacs": { 00:25:20.657 "security": 0, 00:25:20.657 "format": 1, 00:25:20.657 "firmware": 1, 00:25:20.657 "ns_manage": 1 00:25:20.657 }, 00:25:20.657 "multi_ctrlr": false, 00:25:20.657 "ana_reporting": false 00:25:20.657 }, 00:25:20.657 "vs": { 00:25:20.657 "nvme_version": "1.2" 00:25:20.657 }, 00:25:20.657 "ns_data": { 00:25:20.657 "id": 1, 00:25:20.657 "can_share": false 00:25:20.657 } 00:25:20.657 } 00:25:20.657 ], 00:25:20.657 "mp_policy": "active_passive" 00:25:20.657 } 00:25:20.657 } 00:25:20.657 ] 00:25:20.940 10:32:45 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:20.940 10:32:45 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:21.874 ff766b7f-3fca-4372-9f89-be7450fcb2a6 00:25:22.132 10:32:46 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:22.132 ce4d1149-fe47-47a8-a441-6926b08a9d66 00:25:22.132 10:32:46 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:22.133 10:32:46 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:22.133 10:32:46 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:22.133 10:32:46 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:22.133 10:32:46 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:22.133 10:32:46 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:22.133 10:32:46 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:22.390 10:32:46 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:22.390 [ 00:25:22.390 { 00:25:22.390 "name": "ce4d1149-fe47-47a8-a441-6926b08a9d66", 00:25:22.390 "aliases": [ 00:25:22.390 "lvs0/lv0" 00:25:22.390 ], 00:25:22.390 "product_name": "Logical Volume", 00:25:22.390 "block_size": 512, 00:25:22.390 "num_blocks": 204800, 00:25:22.390 "uuid": "ce4d1149-fe47-47a8-a441-6926b08a9d66", 00:25:22.390 "assigned_rate_limits": { 00:25:22.390 "rw_ios_per_sec": 0, 00:25:22.390 
"rw_mbytes_per_sec": 0, 00:25:22.390 "r_mbytes_per_sec": 0, 00:25:22.390 "w_mbytes_per_sec": 0 00:25:22.390 }, 00:25:22.390 "claimed": false, 00:25:22.390 "zoned": false, 00:25:22.390 "supported_io_types": { 00:25:22.390 "read": true, 00:25:22.390 "write": true, 00:25:22.390 "unmap": true, 00:25:22.390 "flush": false, 00:25:22.390 "reset": true, 00:25:22.390 "nvme_admin": false, 00:25:22.390 "nvme_io": false, 00:25:22.390 "nvme_io_md": false, 00:25:22.390 "write_zeroes": true, 00:25:22.390 "zcopy": false, 00:25:22.390 "get_zone_info": false, 00:25:22.390 "zone_management": false, 00:25:22.390 "zone_append": false, 00:25:22.390 "compare": false, 00:25:22.390 "compare_and_write": false, 00:25:22.390 "abort": false, 00:25:22.390 "seek_hole": true, 00:25:22.390 "seek_data": true, 00:25:22.391 "copy": false, 00:25:22.391 "nvme_iov_md": false 00:25:22.391 }, 00:25:22.391 "driver_specific": { 00:25:22.391 "lvol": { 00:25:22.391 "lvol_store_uuid": "ff766b7f-3fca-4372-9f89-be7450fcb2a6", 00:25:22.391 "base_bdev": "Nvme0n1", 00:25:22.391 "thin_provision": true, 00:25:22.391 "num_allocated_clusters": 0, 00:25:22.391 "snapshot": false, 00:25:22.391 "clone": false, 00:25:22.391 "esnap_clone": false 00:25:22.391 } 00:25:22.391 } 00:25:22.391 } 00:25:22.391 ] 00:25:22.391 10:32:47 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:22.391 10:32:47 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:25:22.391 10:32:47 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:25:22.649 [2024-07-15 10:32:47.331550] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:25:22.649 COMP_lvs0/lv0 00:25:22.649 10:32:47 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:22.649 10:32:47 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:22.649 10:32:47 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:22.649 10:32:47 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:22.649 10:32:47 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:22.649 10:32:47 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:22.649 10:32:47 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:22.909 10:32:47 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:22.909 [ 00:25:22.909 { 00:25:22.909 "name": "COMP_lvs0/lv0", 00:25:22.909 "aliases": [ 00:25:22.909 "b552f793-7c13-5cd0-9896-b1f9995f8bed" 00:25:22.909 ], 00:25:22.909 "product_name": "compress", 00:25:22.909 "block_size": 4096, 00:25:22.909 "num_blocks": 25088, 00:25:22.909 "uuid": "b552f793-7c13-5cd0-9896-b1f9995f8bed", 00:25:22.909 "assigned_rate_limits": { 00:25:22.909 "rw_ios_per_sec": 0, 00:25:22.909 "rw_mbytes_per_sec": 0, 00:25:22.909 "r_mbytes_per_sec": 0, 00:25:22.909 "w_mbytes_per_sec": 0 00:25:22.909 }, 00:25:22.909 "claimed": false, 00:25:22.909 "zoned": false, 00:25:22.909 "supported_io_types": { 00:25:22.909 "read": true, 00:25:22.909 "write": true, 00:25:22.909 "unmap": false, 00:25:22.909 "flush": false, 00:25:22.909 "reset": false, 00:25:22.909 "nvme_admin": false, 00:25:22.909 "nvme_io": false, 00:25:22.909 "nvme_io_md": false, 
00:25:22.909 "write_zeroes": true, 00:25:22.910 "zcopy": false, 00:25:22.910 "get_zone_info": false, 00:25:22.910 "zone_management": false, 00:25:22.910 "zone_append": false, 00:25:22.910 "compare": false, 00:25:22.910 "compare_and_write": false, 00:25:22.910 "abort": false, 00:25:22.910 "seek_hole": false, 00:25:22.910 "seek_data": false, 00:25:22.910 "copy": false, 00:25:22.910 "nvme_iov_md": false 00:25:22.910 }, 00:25:22.910 "driver_specific": { 00:25:22.910 "compress": { 00:25:22.910 "name": "COMP_lvs0/lv0", 00:25:22.910 "base_bdev_name": "ce4d1149-fe47-47a8-a441-6926b08a9d66" 00:25:22.910 } 00:25:22.910 } 00:25:22.910 } 00:25:22.910 ] 00:25:22.910 10:32:47 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:22.910 10:32:47 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:23.168 Running I/O for 3 seconds... 00:25:26.448 00:25:26.448 Latency(us) 00:25:26.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.448 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:25:26.448 Verification LBA range: start 0x0 length 0x3100 00:25:26.448 COMP_lvs0/lv0 : 3.01 3527.10 13.78 0.00 0.00 9033.93 57.75 16252.93 00:25:26.448 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:25:26.448 Verification LBA range: start 0x3100 length 0x3100 00:25:26.448 COMP_lvs0/lv0 : 3.01 3514.55 13.73 0.00 0.00 9053.35 55.30 15414.07 00:25:26.448 =================================================================================================================== 00:25:26.448 Total : 7041.65 27.51 0.00 0.00 9043.63 55.30 16252.93 00:25:26.448 0 00:25:26.448 10:32:50 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:25:26.448 10:32:50 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:26.448 10:32:50 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:26.448 10:32:51 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:26.448 10:32:51 compress_isal -- compress/compress.sh@78 -- # killprocess 1924714 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1924714 ']' 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1924714 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1924714 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1924714' 00:25:26.448 killing process with pid 1924714 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@967 -- # kill 1924714 00:25:26.448 Received shutdown signal, test time was about 3.000000 seconds 00:25:26.448 00:25:26.448 Latency(us) 00:25:26.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.448 
=================================================================================================================== 00:25:26.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:26.448 10:32:51 compress_isal -- common/autotest_common.sh@972 -- # wait 1924714 00:25:28.977 10:32:53 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:25:28.977 10:32:53 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:25:28.977 10:32:53 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=1926702 00:25:28.977 10:32:53 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:28.977 10:32:53 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:25:28.977 10:32:53 compress_isal -- compress/compress.sh@57 -- # waitforlisten 1926702 00:25:28.977 10:32:53 compress_isal -- common/autotest_common.sh@829 -- # '[' -z 1926702 ']' 00:25:28.977 10:32:53 compress_isal -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.977 10:32:53 compress_isal -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:28.977 10:32:53 compress_isal -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.978 10:32:53 compress_isal -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:28.978 10:32:53 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:28.978 [2024-07-15 10:32:53.649951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:28.978 [2024-07-15 10:32:53.650002] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926702 ] 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: 
Requested device 0000:3d:02.2 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:28.978 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:28.978 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:28.978 [2024-07-15 10:32:53.742277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:29.236 [2024-07-15 10:32:53.819600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.236 [2024-07-15 10:32:53.819694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.236 [2024-07-15 10:32:53.819695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.800 10:32:54 compress_isal -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.800 10:32:54 compress_isal -- 
common/autotest_common.sh@862 -- # return 0 00:25:29.800 10:32:54 compress_isal -- compress/compress.sh@58 -- # create_vols 00:25:29.800 10:32:54 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:29.800 10:32:54 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:33.083 10:32:57 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:25:33.083 10:32:57 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=Nvme0n1 00:25:33.083 10:32:57 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:33.083 10:32:57 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:33.083 10:32:57 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:33.083 10:32:57 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:33.083 10:32:57 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:33.083 10:32:57 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:25:33.083 [ 00:25:33.083 { 00:25:33.083 "name": "Nvme0n1", 00:25:33.083 "aliases": [ 00:25:33.083 "f2d005cc-2b66-43cd-8311-434bbb7b951c" 00:25:33.083 ], 00:25:33.083 "product_name": "NVMe disk", 00:25:33.083 "block_size": 512, 00:25:33.083 "num_blocks": 3907029168, 00:25:33.083 "uuid": "f2d005cc-2b66-43cd-8311-434bbb7b951c", 00:25:33.083 "assigned_rate_limits": { 00:25:33.083 "rw_ios_per_sec": 0, 00:25:33.083 "rw_mbytes_per_sec": 0, 00:25:33.083 "r_mbytes_per_sec": 0, 00:25:33.083 "w_mbytes_per_sec": 0 00:25:33.083 }, 00:25:33.083 "claimed": false, 00:25:33.083 "zoned": false, 00:25:33.083 "supported_io_types": { 00:25:33.083 "read": true, 00:25:33.083 "write": true, 00:25:33.083 "unmap": true, 00:25:33.083 "flush": true, 00:25:33.083 "reset": true, 00:25:33.083 "nvme_admin": true, 00:25:33.083 "nvme_io": true, 00:25:33.083 "nvme_io_md": false, 00:25:33.083 "write_zeroes": true, 00:25:33.083 "zcopy": false, 00:25:33.083 "get_zone_info": false, 00:25:33.083 "zone_management": false, 00:25:33.083 "zone_append": false, 00:25:33.083 "compare": false, 00:25:33.083 "compare_and_write": false, 00:25:33.083 "abort": true, 00:25:33.083 "seek_hole": false, 00:25:33.083 "seek_data": false, 00:25:33.083 "copy": false, 00:25:33.083 "nvme_iov_md": false 00:25:33.083 }, 00:25:33.083 "driver_specific": { 00:25:33.083 "nvme": [ 00:25:33.083 { 00:25:33.083 "pci_address": "0000:d8:00.0", 00:25:33.083 "trid": { 00:25:33.083 "trtype": "PCIe", 00:25:33.083 "traddr": "0000:d8:00.0" 00:25:33.083 }, 00:25:33.083 "ctrlr_data": { 00:25:33.083 "cntlid": 0, 00:25:33.083 "vendor_id": "0x8086", 00:25:33.083 "model_number": "INTEL SSDPE2KX020T8", 00:25:33.083 "serial_number": "BTLJ125505KA2P0BGN", 00:25:33.083 "firmware_revision": "VDV10170", 00:25:33.083 "oacs": { 00:25:33.083 "security": 0, 00:25:33.083 "format": 1, 00:25:33.083 "firmware": 1, 00:25:33.083 "ns_manage": 1 00:25:33.083 }, 00:25:33.083 "multi_ctrlr": false, 00:25:33.083 "ana_reporting": false 00:25:33.083 }, 00:25:33.083 "vs": { 00:25:33.083 "nvme_version": "1.2" 00:25:33.083 }, 00:25:33.083 "ns_data": { 00:25:33.083 "id": 1, 00:25:33.083 "can_share": false 00:25:33.083 } 00:25:33.083 } 00:25:33.083 ], 00:25:33.083 "mp_policy": "active_passive" 00:25:33.083 } 00:25:33.083 } 00:25:33.083 ] 
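For anyone skimming the trace rather than re-running it, the create_vols/waitforbdev/destroy_vols helpers exercised above and below boil down to a short rpc.py sequence. The following is a minimal sketch reconstructed only from the commands visible in this log; $rootdir is shorthand introduced here for /var/jenkins/workspace/crypto-phy-autotest/spdk, the pipeline on the first line is inferred from the paired compress.sh@34 trace entries, and the -l 4096 logical-block-size argument appears only in the first (bdevperf) run.

    # generate the local NVMe bdev configuration and load it into the running app
    $rootdir/scripts/gen_nvme.sh | $rootdir/scripts/rpc.py load_subsystem_config
    # carve an lvolstore out of Nvme0n1, then a 100 MiB thin-provisioned (-t) lvol
    # (100 MiB matches the 204800 x 512-byte blocks reported in the dumps above)
    $rootdir/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
    $rootdir/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
    # wrap the lvol in a compress vbdev backed by /tmp/pmem (-l 4096 only in run one)
    $rootdir/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096
    # waitforbdev: let examine finish, then poll for the new bdev with a 2000 ms timeout
    $rootdir/scripts/rpc.py bdev_wait_for_examine
    $rootdir/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000
    # run the workload (bdevperf.py perform_tests above, bdevio tests.py below),
    # then tear down in reverse order
    $rootdir/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
    $rootdir/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0

waitforbdev itself is nothing more than the bdev_wait_for_examine call plus the bdev_get_bdevs poll sketched above; the JSON blocks interleaved in this trace are the successful bdev_get_bdevs responses it receives.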
00:25:33.083 10:32:57 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:33.083 10:32:57 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:25:34.459 c23d7cd5-b38c-433b-82c5-dc2f13784d5e 00:25:34.459 10:32:59 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:25:34.459 f8bb1a47-9d42-492d-b93b-39a6d99c1b26 00:25:34.459 10:32:59 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:25:34.459 10:32:59 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=lvs0/lv0 00:25:34.459 10:32:59 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:34.459 10:32:59 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:34.459 10:32:59 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:34.459 10:32:59 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:34.459 10:32:59 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:34.719 10:32:59 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:25:35.023 [ 00:25:35.023 { 00:25:35.023 "name": "f8bb1a47-9d42-492d-b93b-39a6d99c1b26", 00:25:35.023 "aliases": [ 00:25:35.023 "lvs0/lv0" 00:25:35.023 ], 00:25:35.023 "product_name": "Logical Volume", 00:25:35.023 "block_size": 512, 00:25:35.023 "num_blocks": 204800, 00:25:35.023 "uuid": "f8bb1a47-9d42-492d-b93b-39a6d99c1b26", 00:25:35.023 "assigned_rate_limits": { 00:25:35.023 "rw_ios_per_sec": 0, 00:25:35.023 "rw_mbytes_per_sec": 0, 00:25:35.023 "r_mbytes_per_sec": 0, 00:25:35.023 "w_mbytes_per_sec": 0 00:25:35.023 }, 00:25:35.023 "claimed": false, 00:25:35.023 "zoned": false, 00:25:35.023 "supported_io_types": { 00:25:35.023 "read": true, 00:25:35.023 "write": true, 00:25:35.023 "unmap": true, 00:25:35.023 "flush": false, 00:25:35.023 "reset": true, 00:25:35.023 "nvme_admin": false, 00:25:35.023 "nvme_io": false, 00:25:35.023 "nvme_io_md": false, 00:25:35.023 "write_zeroes": true, 00:25:35.023 "zcopy": false, 00:25:35.023 "get_zone_info": false, 00:25:35.023 "zone_management": false, 00:25:35.023 "zone_append": false, 00:25:35.023 "compare": false, 00:25:35.023 "compare_and_write": false, 00:25:35.023 "abort": false, 00:25:35.023 "seek_hole": true, 00:25:35.023 "seek_data": true, 00:25:35.023 "copy": false, 00:25:35.023 "nvme_iov_md": false 00:25:35.023 }, 00:25:35.023 "driver_specific": { 00:25:35.023 "lvol": { 00:25:35.023 "lvol_store_uuid": "c23d7cd5-b38c-433b-82c5-dc2f13784d5e", 00:25:35.023 "base_bdev": "Nvme0n1", 00:25:35.023 "thin_provision": true, 00:25:35.023 "num_allocated_clusters": 0, 00:25:35.023 "snapshot": false, 00:25:35.023 "clone": false, 00:25:35.023 "esnap_clone": false 00:25:35.023 } 00:25:35.023 } 00:25:35.023 } 00:25:35.023 ] 00:25:35.023 10:32:59 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:35.023 10:32:59 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:25:35.023 10:32:59 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:25:35.023 [2024-07-15 10:32:59.677457] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device 
and virtual bdev for: COMP_lvs0/lv0 00:25:35.023 COMP_lvs0/lv0 00:25:35.023 10:32:59 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:25:35.023 10:32:59 compress_isal -- common/autotest_common.sh@897 -- # local bdev_name=COMP_lvs0/lv0 00:25:35.023 10:32:59 compress_isal -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:35.023 10:32:59 compress_isal -- common/autotest_common.sh@899 -- # local i 00:25:35.023 10:32:59 compress_isal -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:35.023 10:32:59 compress_isal -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:35.023 10:32:59 compress_isal -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:35.281 10:32:59 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:25:35.281 [ 00:25:35.281 { 00:25:35.281 "name": "COMP_lvs0/lv0", 00:25:35.281 "aliases": [ 00:25:35.281 "539e83c5-6d6f-51d8-9698-52d516e3e5aa" 00:25:35.281 ], 00:25:35.281 "product_name": "compress", 00:25:35.281 "block_size": 512, 00:25:35.281 "num_blocks": 200704, 00:25:35.281 "uuid": "539e83c5-6d6f-51d8-9698-52d516e3e5aa", 00:25:35.281 "assigned_rate_limits": { 00:25:35.281 "rw_ios_per_sec": 0, 00:25:35.281 "rw_mbytes_per_sec": 0, 00:25:35.281 "r_mbytes_per_sec": 0, 00:25:35.281 "w_mbytes_per_sec": 0 00:25:35.281 }, 00:25:35.281 "claimed": false, 00:25:35.281 "zoned": false, 00:25:35.281 "supported_io_types": { 00:25:35.281 "read": true, 00:25:35.281 "write": true, 00:25:35.281 "unmap": false, 00:25:35.281 "flush": false, 00:25:35.281 "reset": false, 00:25:35.281 "nvme_admin": false, 00:25:35.281 "nvme_io": false, 00:25:35.281 "nvme_io_md": false, 00:25:35.281 "write_zeroes": true, 00:25:35.281 "zcopy": false, 00:25:35.281 "get_zone_info": false, 00:25:35.281 "zone_management": false, 00:25:35.281 "zone_append": false, 00:25:35.281 "compare": false, 00:25:35.281 "compare_and_write": false, 00:25:35.281 "abort": false, 00:25:35.281 "seek_hole": false, 00:25:35.281 "seek_data": false, 00:25:35.281 "copy": false, 00:25:35.281 "nvme_iov_md": false 00:25:35.281 }, 00:25:35.281 "driver_specific": { 00:25:35.281 "compress": { 00:25:35.281 "name": "COMP_lvs0/lv0", 00:25:35.281 "base_bdev_name": "f8bb1a47-9d42-492d-b93b-39a6d99c1b26" 00:25:35.281 } 00:25:35.281 } 00:25:35.281 } 00:25:35.282 ] 00:25:35.282 10:33:00 compress_isal -- common/autotest_common.sh@905 -- # return 0 00:25:35.282 10:33:00 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:35.540 I/O targets: 00:25:35.540 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:25:35.540 00:25:35.540 00:25:35.540 CUnit - A unit testing framework for C - Version 2.1-3 00:25:35.540 http://cunit.sourceforge.net/ 00:25:35.540 00:25:35.540 00:25:35.540 Suite: bdevio tests on: COMP_lvs0/lv0 00:25:35.540 Test: blockdev write read block ...passed 00:25:35.540 Test: blockdev write zeroes read block ...passed 00:25:35.540 Test: blockdev write zeroes read no split ...passed 00:25:35.540 Test: blockdev write zeroes read split ...passed 00:25:35.540 Test: blockdev write zeroes read split partial ...passed 00:25:35.540 Test: blockdev reset ...[2024-07-15 10:33:00.200162] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:25:35.540 passed 00:25:35.540 Test: blockdev write read 8 blocks ...passed 
00:25:35.540 Test: blockdev write read size > 128k ...passed 00:25:35.540 Test: blockdev write read invalid size ...passed 00:25:35.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:35.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:35.540 Test: blockdev write read max offset ...passed 00:25:35.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:35.540 Test: blockdev writev readv 8 blocks ...passed 00:25:35.540 Test: blockdev writev readv 30 x 1block ...passed 00:25:35.540 Test: blockdev writev readv block ...passed 00:25:35.540 Test: blockdev writev readv size > 128k ...passed 00:25:35.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:35.540 Test: blockdev comparev and writev ...passed 00:25:35.540 Test: blockdev nvme passthru rw ...passed 00:25:35.540 Test: blockdev nvme passthru vendor specific ...passed 00:25:35.540 Test: blockdev nvme admin passthru ...passed 00:25:35.540 Test: blockdev copy ...passed 00:25:35.540 00:25:35.540 Run Summary: Type Total Ran Passed Failed Inactive 00:25:35.540 suites 1 1 n/a 0 0 00:25:35.540 tests 23 23 23 0 0 00:25:35.540 asserts 130 130 130 0 n/a 00:25:35.540 00:25:35.540 Elapsed time = 0.189 seconds 00:25:35.540 0 00:25:35.540 10:33:00 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:25:35.540 10:33:00 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:25:35.798 10:33:00 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:25:36.057 10:33:00 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:25:36.057 10:33:00 compress_isal -- compress/compress.sh@62 -- # killprocess 1926702 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@948 -- # '[' -z 1926702 ']' 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@952 -- # kill -0 1926702 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@953 -- # uname 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1926702 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1926702' 00:25:36.057 killing process with pid 1926702 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@967 -- # kill 1926702 00:25:36.057 10:33:00 compress_isal -- common/autotest_common.sh@972 -- # wait 1926702 00:25:38.588 10:33:03 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:25:38.588 10:33:03 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:25:38.588 00:25:38.588 real 0m46.837s 00:25:38.588 user 1m45.084s 00:25:38.588 sys 0m3.470s 00:25:38.588 10:33:03 compress_isal -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:38.588 10:33:03 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:25:38.588 ************************************ 00:25:38.588 END TEST compress_isal 00:25:38.588 ************************************ 00:25:38.588 10:33:03 -- common/autotest_common.sh@1142 -- # return 0 00:25:38.588 10:33:03 -- 
spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:38.588 10:33:03 -- spdk/autotest.sh@356 -- # '[' 1 -eq 1 ']' 00:25:38.588 10:33:03 -- spdk/autotest.sh@357 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:25:38.588 10:33:03 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:38.588 10:33:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.588 10:33:03 -- common/autotest_common.sh@10 -- # set +x 00:25:38.588 ************************************ 00:25:38.588 START TEST blockdev_crypto_aesni 00:25:38.588 ************************************ 00:25:38.588 10:33:03 blockdev_crypto_aesni -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:25:38.588 * Looking for test storage... 00:25:38.588 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # uname -s 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # test_type=crypto_aesni 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # crypto_device= 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # dek= 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # env_ctx= 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == bdev ]] 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # [[ crypto_aesni == crypto_* ]] 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1928554 
00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:25:38.588 10:33:03 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 1928554 00:25:38.588 10:33:03 blockdev_crypto_aesni -- common/autotest_common.sh@829 -- # '[' -z 1928554 ']' 00:25:38.588 10:33:03 blockdev_crypto_aesni -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.588 10:33:03 blockdev_crypto_aesni -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:38.588 10:33:03 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.588 10:33:03 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:38.588 10:33:03 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:38.846 [2024-07-15 10:33:03.396043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:38.846 [2024-07-15 10:33:03.396093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1928554 ] 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:02.5 cannot be 
used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:38.846 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:38.846 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:38.846 [2024-07-15 10:33:03.488907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.846 [2024-07-15 10:33:03.562757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.410 10:33:04 blockdev_crypto_aesni -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.410 10:33:04 blockdev_crypto_aesni -- common/autotest_common.sh@862 -- # return 0 00:25:39.410 10:33:04 blockdev_crypto_aesni -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:25:39.410 10:33:04 blockdev_crypto_aesni -- bdev/blockdev.sh@705 -- # setup_crypto_aesni_conf 00:25:39.410 10:33:04 blockdev_crypto_aesni -- bdev/blockdev.sh@146 -- # rpc_cmd 00:25:39.410 10:33:04 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.410 10:33:04 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:39.667 [2024-07-15 10:33:04.200738] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:39.667 
[2024-07-15 10:33:04.208770] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:39.667 [2024-07-15 10:33:04.216783] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:39.667 [2024-07-15 10:33:04.279029] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:42.196 true 00:25:42.196 true 00:25:42.196 true 00:25:42.196 true 00:25:42.196 Malloc0 00:25:42.196 Malloc1 00:25:42.196 Malloc2 00:25:42.196 Malloc3 00:25:42.196 [2024-07-15 10:33:06.566185] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:42.196 crypto_ram 00:25:42.196 [2024-07-15 10:33:06.574202] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:42.196 crypto_ram2 00:25:42.196 [2024-07-15 10:33:06.582220] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:42.196 crypto_ram3 00:25:42.196 [2024-07-15 10:33:06.590238] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:42.196 crypto_ram4 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # cat 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:42.196 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.196 10:33:06 
blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:25:42.196 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # jq -r .name 00:25:42.197 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "91e4ea2f-954c-5b1f-a6b5-fe916642822e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "91e4ea2f-954c-5b1f-a6b5-fe916642822e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "17ae5eec-f954-52e5-9138-317ecd15fa16"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "17ae5eec-f954-52e5-9138-317ecd15fa16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "c625f48e-b6f8-5211-bbef-cd960c2c9779"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c625f48e-b6f8-5211-bbef-cd960c2c9779",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "2440ff81-e5a5-5987-aea7-4da707c48594"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2440ff81-e5a5-5987-aea7-4da707c48594",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:25:42.197 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:25:42.197 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:25:42.197 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:25:42.197 10:33:06 blockdev_crypto_aesni -- bdev/blockdev.sh@754 -- # killprocess 1928554 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@948 -- # '[' -z 1928554 ']' 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@952 -- # kill -0 1928554 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # uname 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1928554 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1928554' 00:25:42.197 killing process with pid 1928554 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@967 -- # kill 1928554 00:25:42.197 10:33:06 blockdev_crypto_aesni -- common/autotest_common.sh@972 -- # wait 1928554 00:25:42.764 10:33:07 blockdev_crypto_aesni -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:42.764 10:33:07 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:25:42.764 10:33:07 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:25:42.765 10:33:07 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.765 10:33:07 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:42.765 ************************************ 00:25:42.765 START TEST bdev_hello_world 00:25:42.765 ************************************ 00:25:42.765 10:33:07 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:25:42.765 [2024-07-15 10:33:07.398401] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:42.765 [2024-07-15 10:33:07.398444] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929607 ] 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: 
Requested device 0000:3f:01.0 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:42.765 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:42.765 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:42.765 [2024-07-15 10:33:07.487654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.024 [2024-07-15 10:33:07.555842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.024 [2024-07-15 10:33:07.576744] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:43.024 [2024-07-15 10:33:07.584768] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:43.024 [2024-07-15 10:33:07.592789] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:43.024 [2024-07-15 10:33:07.686421] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:45.561 [2024-07-15 10:33:09.833012] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:45.561 [2024-07-15 10:33:09.833074] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:45.561 [2024-07-15 10:33:09.833086] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:45.561 [2024-07-15 10:33:09.841029] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:45.561 [2024-07-15 10:33:09.841041] bdev.c:8157:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:45.561 [2024-07-15 10:33:09.841049] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:45.561 [2024-07-15 10:33:09.849060] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:45.561 [2024-07-15 10:33:09.849072] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:45.561 [2024-07-15 10:33:09.849079] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:45.561 [2024-07-15 10:33:09.857079] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:45.561 [2024-07-15 10:33:09.857090] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:45.561 [2024-07-15 10:33:09.857098] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:45.561 [2024-07-15 10:33:09.924875] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:45.561 [2024-07-15 10:33:09.924916] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:25:45.561 [2024-07-15 10:33:09.924930] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:45.561 [2024-07-15 10:33:09.925784] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:45.561 [2024-07-15 10:33:09.925843] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:45.561 [2024-07-15 10:33:09.925855] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:45.561 [2024-07-15 10:33:09.925888] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:25:45.561 00:25:45.561 [2024-07-15 10:33:09.925908] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:45.561 00:25:45.561 real 0m2.874s 00:25:45.561 user 0m2.572s 00:25:45.561 sys 0m0.270s 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:45.561 ************************************ 00:25:45.561 END TEST bdev_hello_world 00:25:45.561 ************************************ 00:25:45.561 10:33:10 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:25:45.561 10:33:10 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:25:45.561 10:33:10 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:45.561 10:33:10 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.561 10:33:10 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:45.561 ************************************ 00:25:45.561 START TEST bdev_bounds 00:25:45.561 ************************************ 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1930152 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1930152' 00:25:45.561 Process bdevio pid: 1930152 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1930152 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1930152 ']' 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.561 10:33:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:45.820 [2024-07-15 10:33:10.358381] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
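For reference, the bdev_bounds test that starts here reduces to two commands, both of which appear verbatim in this log: bdevio is launched against the same bdev.json in wait mode, and tests.py then triggers the CUnit suites over the app's RPC socket. A condensed sketch (the reading of -w as "wait for the perform_tests RPC" is an assumption; the paths and flags are the ones echoed in the log):

spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk

# start bdevio with the test bdev configuration and let it wait for an RPC trigger
$spdk/test/bdev/bdevio/bdevio -w -s 0 --json $spdk/test/bdev/bdev.json '' &

# once /var/tmp/spdk.sock is listening, run the bdevio CUnit suites
$spdk/test/bdev/bdevio/tests.py perform_tests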
00:25:45.820 [2024-07-15 10:33:10.358425] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930152 ] 00:25:45.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.820 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:45.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.820 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:45.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.820 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:45.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.820 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:45.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.820 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:45.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.820 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:01.6 cannot be used 
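The run of "Reached maximum number of QAT devices / Requested device ... cannot be used" messages above means none of the QAT virtual functions could be claimed on this node; the notices that follow show the accel framework falling back to the software crypto_aesni_mb driver instead. Selecting that fallback by hand would look roughly like the sketch below; the RPC names correspond to the handlers quoted in this log (accel_dpdk_cryptodev_set_driver, rpc_accel_assign_opc), but the exact flag spellings are assumptions and can differ between SPDK versions.

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

# choose the software DPDK cryptodev driver instead of QAT (flag spelling assumed)
$rpc dpdk_cryptodev_set_driver -d crypto_aesni_mb

# route encrypt/decrypt operations to the dpdk_cryptodev accel module
$rpc accel_assign_opc -o encrypt -m dpdk_cryptodev
$rpc accel_assign_opc -o decrypt -m dpdk_cryptodev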
00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:45.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:45.821 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:45.821 [2024-07-15 10:33:10.450586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:45.821 [2024-07-15 10:33:10.526361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.821 [2024-07-15 10:33:10.526470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.821 [2024-07-15 10:33:10.526472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.821 [2024-07-15 10:33:10.547454] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:45.821 [2024-07-15 10:33:10.555477] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:45.821 [2024-07-15 10:33:10.563497] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:46.080 [2024-07-15 10:33:10.662624] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:48.617 [2024-07-15 10:33:12.810610] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:48.617 [2024-07-15 10:33:12.810673] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:48.617 [2024-07-15 10:33:12.810684] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:48.617 [2024-07-15 10:33:12.818628] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:48.617 [2024-07-15 10:33:12.818641] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:48.617 [2024-07-15 10:33:12.818649] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:48.617 [2024-07-15 10:33:12.826646] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:48.617 [2024-07-15 10:33:12.826658] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:48.617 [2024-07-15 10:33:12.826666] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:48.617 [2024-07-15 10:33:12.834670] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:48.617 [2024-07-15 10:33:12.834681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:48.617 [2024-07-15 10:33:12.834689] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:48.617 10:33:12 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.617 10:33:12 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:25:48.617 10:33:12 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:48.617 I/O targets: 00:25:48.617 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:25:48.617 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:25:48.617 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:25:48.617 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:25:48.617 00:25:48.617 00:25:48.617 CUnit - A unit testing framework for C - Version 2.1-3 00:25:48.617 http://cunit.sourceforge.net/ 00:25:48.617 00:25:48.617 00:25:48.617 Suite: bdevio tests on: crypto_ram4 00:25:48.617 Test: blockdev write read block ...passed 00:25:48.617 Test: blockdev write zeroes read block ...passed 00:25:48.617 Test: blockdev write zeroes read no split ...passed 00:25:48.617 Test: blockdev write zeroes read split ...passed 00:25:48.617 Test: blockdev write zeroes read split partial ...passed 00:25:48.617 Test: blockdev reset ...passed 00:25:48.617 Test: blockdev write read 8 blocks ...passed 00:25:48.617 Test: blockdev write read size > 128k ...passed 00:25:48.617 Test: blockdev write read invalid size ...passed 00:25:48.617 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:48.617 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:48.617 Test: blockdev write read max offset ...passed 00:25:48.617 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:48.617 Test: blockdev writev readv 8 blocks ...passed 00:25:48.617 Test: blockdev writev readv 30 x 1block ...passed 00:25:48.617 Test: blockdev writev readv block ...passed 00:25:48.617 Test: blockdev writev readv size > 128k ...passed 00:25:48.617 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:48.617 Test: blockdev comparev and writev ...passed 00:25:48.617 Test: blockdev nvme passthru rw ...passed 00:25:48.617 Test: blockdev nvme passthru vendor specific ...passed 00:25:48.617 Test: blockdev nvme admin passthru ...passed 00:25:48.617 Test: blockdev copy ...passed 00:25:48.617 Suite: bdevio tests on: crypto_ram3 00:25:48.617 Test: blockdev write read block ...passed 00:25:48.617 Test: blockdev write zeroes read block ...passed 00:25:48.617 Test: blockdev write zeroes read no split ...passed 00:25:48.617 Test: blockdev write zeroes read split ...passed 00:25:48.617 Test: blockdev write zeroes read split partial ...passed 00:25:48.617 Test: blockdev reset ...passed 00:25:48.617 Test: blockdev write read 8 blocks ...passed 00:25:48.617 Test: blockdev write read size > 128k ...passed 00:25:48.617 Test: blockdev write read invalid size ...passed 00:25:48.617 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:48.617 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:48.617 Test: blockdev write read max offset ...passed 00:25:48.617 Test: blockdev write read 2 blocks on overlapped address offset 
...passed 00:25:48.617 Test: blockdev writev readv 8 blocks ...passed 00:25:48.617 Test: blockdev writev readv 30 x 1block ...passed 00:25:48.617 Test: blockdev writev readv block ...passed 00:25:48.617 Test: blockdev writev readv size > 128k ...passed 00:25:48.617 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:48.617 Test: blockdev comparev and writev ...passed 00:25:48.617 Test: blockdev nvme passthru rw ...passed 00:25:48.617 Test: blockdev nvme passthru vendor specific ...passed 00:25:48.617 Test: blockdev nvme admin passthru ...passed 00:25:48.617 Test: blockdev copy ...passed 00:25:48.617 Suite: bdevio tests on: crypto_ram2 00:25:48.617 Test: blockdev write read block ...passed 00:25:48.617 Test: blockdev write zeroes read block ...passed 00:25:48.617 Test: blockdev write zeroes read no split ...passed 00:25:48.617 Test: blockdev write zeroes read split ...passed 00:25:48.617 Test: blockdev write zeroes read split partial ...passed 00:25:48.617 Test: blockdev reset ...passed 00:25:48.617 Test: blockdev write read 8 blocks ...passed 00:25:48.617 Test: blockdev write read size > 128k ...passed 00:25:48.617 Test: blockdev write read invalid size ...passed 00:25:48.617 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:48.617 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:48.617 Test: blockdev write read max offset ...passed 00:25:48.617 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:48.617 Test: blockdev writev readv 8 blocks ...passed 00:25:48.617 Test: blockdev writev readv 30 x 1block ...passed 00:25:48.617 Test: blockdev writev readv block ...passed 00:25:48.617 Test: blockdev writev readv size > 128k ...passed 00:25:48.617 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:48.617 Test: blockdev comparev and writev ...passed 00:25:48.617 Test: blockdev nvme passthru rw ...passed 00:25:48.617 Test: blockdev nvme passthru vendor specific ...passed 00:25:48.617 Test: blockdev nvme admin passthru ...passed 00:25:48.617 Test: blockdev copy ...passed 00:25:48.617 Suite: bdevio tests on: crypto_ram 00:25:48.617 Test: blockdev write read block ...passed 00:25:48.617 Test: blockdev write zeroes read block ...passed 00:25:48.617 Test: blockdev write zeroes read no split ...passed 00:25:48.617 Test: blockdev write zeroes read split ...passed 00:25:48.617 Test: blockdev write zeroes read split partial ...passed 00:25:48.617 Test: blockdev reset ...passed 00:25:48.617 Test: blockdev write read 8 blocks ...passed 00:25:48.617 Test: blockdev write read size > 128k ...passed 00:25:48.617 Test: blockdev write read invalid size ...passed 00:25:48.617 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:48.617 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:48.617 Test: blockdev write read max offset ...passed 00:25:48.617 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:48.617 Test: blockdev writev readv 8 blocks ...passed 00:25:48.617 Test: blockdev writev readv 30 x 1block ...passed 00:25:48.617 Test: blockdev writev readv block ...passed 00:25:48.617 Test: blockdev writev readv size > 128k ...passed 00:25:48.617 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:48.617 Test: blockdev comparev and writev ...passed 00:25:48.617 Test: blockdev nvme passthru rw ...passed 00:25:48.617 Test: blockdev nvme passthru vendor specific ...passed 00:25:48.617 Test: blockdev nvme admin 
passthru ...passed 00:25:48.617 Test: blockdev copy ...passed 00:25:48.617 00:25:48.617 Run Summary: Type Total Ran Passed Failed Inactive 00:25:48.617 suites 4 4 n/a 0 0 00:25:48.617 tests 92 92 92 0 0 00:25:48.617 asserts 520 520 520 0 n/a 00:25:48.617 00:25:48.617 Elapsed time = 0.507 seconds 00:25:48.617 0 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1930152 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1930152 ']' 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 1930152 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1930152 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1930152' 00:25:48.617 killing process with pid 1930152 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1930152 00:25:48.617 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1930152 00:25:48.875 10:33:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:25:48.875 00:25:48.875 real 0m3.326s 00:25:48.875 user 0m9.306s 00:25:48.875 sys 0m0.459s 00:25:48.875 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:48.875 10:33:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:48.875 ************************************ 00:25:48.875 END TEST bdev_bounds 00:25:48.875 ************************************ 00:25:49.134 10:33:13 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:25:49.134 10:33:13 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:25:49.134 10:33:13 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:49.134 10:33:13 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.134 10:33:13 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:49.134 ************************************ 00:25:49.134 START TEST bdev_nbd 00:25:49.134 ************************************ 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local 
conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1930708 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1930708 /var/tmp/spdk-nbd.sock 00:25:49.134 10:33:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1930708 ']' 00:25:49.135 10:33:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:49.135 10:33:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.135 10:33:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:49.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:49.135 10:33:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.135 10:33:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:49.135 [2024-07-15 10:33:13.782812] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
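bdev_svc has just been started with -r /var/tmp/spdk-nbd.sock and the harness is now blocking in waitforlisten. The real helper in autotest_common.sh is more elaborate, but the idea reduces to polling the RPC socket until the new app answers, as in the stand-in below (rpc_get_methods is a real SPDK RPC; the loop itself is only a sketch of the behaviour, not the actual implementation):

rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py

# poll the UNIX-domain RPC socket until the freshly launched app responds
wait_for_rpc_sock() {
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1              # app died before it ever listened
        $rpc -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}

# e.g.: wait_for_rpc_sock "$nbd_pid" /var/tmp/spdk-nbd.sock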
00:25:49.135 [2024-07-15 10:33:13.782863] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:49.135 
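Every app in this run (hello_bdev, bdevio, bdev_svc) is fed the same --json .../test/bdev/bdev.json, which is generated earlier in the job and not reproduced in this log. A heavily trimmed illustration of the general shape, with only one of the four Malloc/crypto pairs and placeholder parameter values, might look like the following; the method names are real SPDK RPC methods, everything else here is an assumption.

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 } },
        { "method": "bdev_crypto_create",
          "params": { "base_bdev_name": "Malloc0", "name": "crypto_ram",
                      "key_name": "test_dek_aesni_cbc_1" } }
      ]
    }
  ]
}
EOF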
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:49.135 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:49.135 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:49.135 [2024-07-15 10:33:13.874800] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.394 [2024-07-15 10:33:13.943611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.394 [2024-07-15 10:33:13.964482] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:25:49.394 [2024-07-15 10:33:13.972504] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:25:49.394 [2024-07-15 10:33:13.980522] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:25:49.394 [2024-07-15 10:33:14.082234] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:25:51.926 [2024-07-15 10:33:16.225268] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:25:51.926 [2024-07-15 10:33:16.225327] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:25:51.926 [2024-07-15 10:33:16.225339] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:51.926 [2024-07-15 10:33:16.233287] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:25:51.926 [2024-07-15 10:33:16.233300] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:25:51.926 [2024-07-15 10:33:16.233308] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:51.926 [2024-07-15 10:33:16.241306] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:25:51.926 [2024-07-15 10:33:16.241318] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:25:51.926 [2024-07-15 10:33:16.241325] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:51.926 [2024-07-15 10:33:16.249326] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:25:51.926 [2024-07-15 10:33:16.249337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:25:51.926 [2024-07-15 10:33:16.249344] 
vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:51.926 1+0 records in 00:25:51.926 1+0 records out 00:25:51.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243258 s, 16.8 MB/s 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:51.926 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:52.185 1+0 records in 00:25:52.185 1+0 records out 00:25:52.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221034 s, 18.5 MB/s 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:52.185 1+0 records in 00:25:52.185 1+0 records out 00:25:52.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294552 s, 13.9 MB/s 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:52.185 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:52.444 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:52.444 10:33:16 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:52.444 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:52.444 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:52.444 10:33:16 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:52.444 10:33:17 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:52.444 1+0 records in 00:25:52.444 1+0 records out 00:25:52.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275381 s, 14.9 MB/s 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:25:52.444 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:52.703 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:52.703 { 00:25:52.703 "nbd_device": "/dev/nbd0", 00:25:52.703 "bdev_name": "crypto_ram" 00:25:52.703 }, 00:25:52.704 { 00:25:52.704 "nbd_device": "/dev/nbd1", 00:25:52.704 "bdev_name": "crypto_ram2" 00:25:52.704 }, 00:25:52.704 { 00:25:52.704 "nbd_device": "/dev/nbd2", 00:25:52.704 "bdev_name": "crypto_ram3" 00:25:52.704 }, 00:25:52.704 { 00:25:52.704 "nbd_device": "/dev/nbd3", 00:25:52.704 "bdev_name": "crypto_ram4" 00:25:52.704 } 00:25:52.704 ]' 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:52.704 { 00:25:52.704 "nbd_device": "/dev/nbd0", 00:25:52.704 "bdev_name": "crypto_ram" 00:25:52.704 }, 00:25:52.704 { 00:25:52.704 "nbd_device": "/dev/nbd1", 00:25:52.704 "bdev_name": "crypto_ram2" 00:25:52.704 }, 00:25:52.704 { 00:25:52.704 "nbd_device": "/dev/nbd2", 00:25:52.704 "bdev_name": "crypto_ram3" 00:25:52.704 }, 00:25:52.704 { 00:25:52.704 "nbd_device": "/dev/nbd3", 00:25:52.704 "bdev_name": "crypto_ram4" 00:25:52.704 } 00:25:52.704 ]' 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:52.704 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.704 
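The first NBD verification pass above exports each crypto bdev without naming a device node (nbd_start_disk returns the /dev/nbdX the kernel assigned), waits for the node to appear in /proc/partitions, proves it serves data with a single direct-I/O read, and is now tearing everything down again. Condensed, the start half of that loop amounts to the sketch below, using the same socket and bdev names as in the log (the polling loop stands in for waitfornbd, and /tmp replaces the workspace scratch file):

spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for bdev in crypto_ram crypto_ram2 crypto_ram3 crypto_ram4; do
    nbd=$($rpc nbd_start_disk "$bdev")                  # kernel picks the next free /dev/nbdX
    until grep -q -w "${nbd#/dev/}" /proc/partitions; do sleep 0.1; done
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]              # a non-empty read means the export works
    rm -f /tmp/nbdtest
done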
10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.963 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:53.222 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:53.222 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:53.222 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:53.222 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:53.222 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:53.223 10:33:17 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.491 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:25:53.781 /dev/nbd0 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:53.781 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.040 1+0 records in 00:25:54.040 1+0 records out 00:25:54.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272727 s, 15.0 MB/s 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:25:54.040 /dev/nbd1 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.040 1+0 records in 00:25:54.040 1+0 records out 00:25:54.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279892 s, 14.6 MB/s 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:54.040 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:25:54.300 /dev/nbd10 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:25:54.300 1+0 records in 00:25:54.300 1+0 records out 00:25:54.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233827 s, 17.5 MB/s 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:54.300 10:33:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:25:54.559 /dev/nbd11 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.559 1+0 records in 00:25:54.559 1+0 records out 00:25:54.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262305 s, 15.6 MB/s 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:54.559 
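Each of the four attach iterations in this trace follows the same pattern: export a crypto bdev over NBD through the RPC socket, poll /proc/partitions until the kernel node shows up, then do a single 4 KiB direct read as a sanity check. Condensed into a standalone sketch, using the socket path and bdev name seen in this run (the retry/sleep cadence is illustrative, not the exact waitfornbd timing):

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0   # export the bdev as an NBD block device
    for i in $(seq 1 20); do                                             # wait for the kernel to register the node
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct         # 4 KiB direct-I/O read sanity check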
10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.559 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:54.818 { 00:25:54.818 "nbd_device": "/dev/nbd0", 00:25:54.818 "bdev_name": "crypto_ram" 00:25:54.818 }, 00:25:54.818 { 00:25:54.818 "nbd_device": "/dev/nbd1", 00:25:54.818 "bdev_name": "crypto_ram2" 00:25:54.818 }, 00:25:54.818 { 00:25:54.818 "nbd_device": "/dev/nbd10", 00:25:54.818 "bdev_name": "crypto_ram3" 00:25:54.818 }, 00:25:54.818 { 00:25:54.818 "nbd_device": "/dev/nbd11", 00:25:54.818 "bdev_name": "crypto_ram4" 00:25:54.818 } 00:25:54.818 ]' 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:54.818 { 00:25:54.818 "nbd_device": "/dev/nbd0", 00:25:54.818 "bdev_name": "crypto_ram" 00:25:54.818 }, 00:25:54.818 { 00:25:54.818 "nbd_device": "/dev/nbd1", 00:25:54.818 "bdev_name": "crypto_ram2" 00:25:54.818 }, 00:25:54.818 { 00:25:54.818 "nbd_device": "/dev/nbd10", 00:25:54.818 "bdev_name": "crypto_ram3" 00:25:54.818 }, 00:25:54.818 { 00:25:54.818 "nbd_device": "/dev/nbd11", 00:25:54.818 "bdev_name": "crypto_ram4" 00:25:54.818 } 00:25:54.818 ]' 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:54.818 /dev/nbd1 00:25:54.818 /dev/nbd10 00:25:54.818 /dev/nbd11' 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:54.818 /dev/nbd1 00:25:54.818 /dev/nbd10 00:25:54.818 /dev/nbd11' 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:54.818 256+0 records in 00:25:54.818 256+0 records out 00:25:54.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107618 s, 97.4 MB/s 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
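The count check just traced expects all four NBD nodes to be registered; the dd/cmp lines that follow then push 1 MiB of random data through each node and compare it back against the source file. As a standalone sketch of that verify pass (paths, device names, and sizes as used in this run):

    rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd   # expect 4
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                  # 1 MiB of random test data
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct         # write pass through each NBD node
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11; do
        cmp -b -n 1M nbdrandtest $nbd                                    # byte-for-byte read-back check
    done
    rm nbdrandtest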
00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:54.818 256+0 records in 00:25:54.818 256+0 records out 00:25:54.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0407395 s, 25.7 MB/s 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:54.818 256+0 records in 00:25:54.818 256+0 records out 00:25:54.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0411005 s, 25.5 MB/s 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:25:54.818 256+0 records in 00:25:54.818 256+0 records out 00:25:54.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0395793 s, 26.5 MB/s 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:54.818 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:25:55.078 256+0 records in 00:25:55.078 256+0 records out 00:25:55.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250433 s, 41.9 MB/s 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:55.078 
10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.078 10:33:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.337 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd10 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.596 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:25:55.855 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:55.855 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.855 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 
/dev/nbd11' 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:25:55.856 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:56.114 malloc_lvol_verify 00:25:56.114 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:56.373 ee352e83-854b-4e4d-a9a8-052760bc50b6 00:25:56.373 10:33:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:56.373 815c0625-f421-413b-9626-06c91122e543 00:25:56.373 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:56.631 /dev/nbd0 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:25:56.631 mke2fs 1.46.5 (30-Dec-2021) 00:25:56.631 Discarding device blocks: 0/4096 done 00:25:56.631 Creating filesystem with 4096 1k blocks and 1024 inodes 00:25:56.631 00:25:56.631 Allocating group tables: 0/1 done 00:25:56.631 Writing inode tables: 0/1 done 00:25:56.631 Creating journal (1024 blocks): done 00:25:56.631 Writing superblocks and filesystem accounting information: 0/1 done 00:25:56.631 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:56.631 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1930708 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1930708 ']' 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1930708 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1930708 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1930708' 00:25:56.890 killing process with pid 1930708 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1930708 00:25:56.890 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@972 -- # wait 1930708 00:25:57.148 10:33:21 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:25:57.148 00:25:57.148 real 0m8.148s 00:25:57.148 user 0m10.259s 00:25:57.148 sys 0m3.117s 00:25:57.148 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.148 10:33:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:57.148 ************************************ 00:25:57.148 END TEST bdev_nbd 00:25:57.148 ************************************ 00:25:57.148 10:33:21 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:25:57.148 10:33:21 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:25:57.148 10:33:21 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = nvme ']' 00:25:57.148 10:33:21 blockdev_crypto_aesni -- bdev/blockdev.sh@764 -- # '[' crypto_aesni = gpt ']' 00:25:57.148 10:33:21 blockdev_crypto_aesni -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:25:57.148 10:33:21 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:57.148 10:33:21 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.148 10:33:21 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:25:57.407 ************************************ 00:25:57.407 START TEST bdev_fio 00:25:57.407 ************************************ 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:25:57.407 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 
/var/jenkins/workspace/crypto-phy-autotest/spdk 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:25:57.407 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:25:57.408 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:25:57.408 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:25:57.408 10:33:21 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- 
bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram4]' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram4 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:57.408 ************************************ 00:25:57.408 START TEST bdev_fio_rw_verify 00:25:57.408 ************************************ 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.408 10:33:22 
blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:57.408 10:33:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:25:57.972 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:57.972 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:57.972 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:57.972 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:57.972 fio-3.35 00:25:57.972 Starting 4 threads 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:01.5 cannot 
be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:57.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.972 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:57.973 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:57.973 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:57.973 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:12.851 00:26:12.851 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1932895: Mon Jul 15 10:33:35 2024 00:26:12.851 read: IOPS=30.5k, BW=119MiB/s (125MB/s)(1190MiB/10001msec) 00:26:12.851 slat (usec): min=11, max=1217, avg=44.69, stdev=31.29 00:26:12.851 clat (usec): min=8, max=2004, avg=234.84, stdev=170.52 00:26:12.851 lat (usec): min=30, max=2057, avg=279.53, stdev=189.43 00:26:12.851 clat percentiles (usec): 00:26:12.851 | 50.000th=[ 192], 99.000th=[ 865], 99.900th=[ 1029], 99.990th=[ 1139], 00:26:12.851 | 99.999th=[ 1598] 00:26:12.851 write: IOPS=33.6k, BW=131MiB/s (138MB/s)(1276MiB/9729msec); 0 zone resets 00:26:12.851 slat (usec): min=16, max=484, avg=53.40, stdev=30.79 00:26:12.851 clat (usec): min=22, max=2840, avg=284.62, stdev=197.21 00:26:12.851 lat (usec): min=49, max=3110, avg=338.02, stdev=215.37 00:26:12.851 clat percentiles (usec): 00:26:12.851 | 50.000th=[ 243], 99.000th=[ 1004], 99.900th=[ 1205], 99.990th=[ 1434], 00:26:12.851 | 99.999th=[ 2638] 00:26:12.851 bw ( KiB/s): min=109976, max=175654, per=97.62%, avg=131087.89, stdev=4485.75, samples=76 00:26:12.851 iops : min=27494, max=43913, avg=32771.95, stdev=1121.43, samples=76 00:26:12.851 lat (usec) : 10=0.01%, 20=0.01%, 50=3.52%, 100=11.54%, 250=44.82% 00:26:12.851 lat (usec) : 500=29.62%, 750=7.37%, 1000=2.52% 00:26:12.851 lat (msec) : 2=0.60%, 4=0.01% 00:26:12.851 cpu : usr=99.69%, sys=0.00%, ctx=65, majf=0, minf=265 00:26:12.851 IO depths : 1=10.4%, 2=25.5%, 4=51.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:12.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.851 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:12.851 issued rwts: total=304606,326622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:12.851 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:12.851 00:26:12.851 Run status group 0 (all jobs): 00:26:12.851 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=1190MiB (1248MB), run=10001-10001msec 00:26:12.851 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1276MiB (1338MB), run=9729-9729msec 00:26:12.851 00:26:12.851 real 0m13.325s 00:26:12.851 user 0m50.743s 00:26:12.851 sys 0m0.452s 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:12.851 ************************************ 00:26:12.851 END TEST bdev_fio_rw_verify 00:26:12.851 ************************************ 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:26:12.851 
10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:26:12.851 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "91e4ea2f-954c-5b1f-a6b5-fe916642822e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "91e4ea2f-954c-5b1f-a6b5-fe916642822e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "17ae5eec-f954-52e5-9138-317ecd15fa16"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "17ae5eec-f954-52e5-9138-317ecd15fa16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "c625f48e-b6f8-5211-bbef-cd960c2c9779"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c625f48e-b6f8-5211-bbef-cd960c2c9779",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "2440ff81-e5a5-5987-aea7-4da707c48594"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2440ff81-e5a5-5987-aea7-4da707c48594",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:26:12.852 crypto_ram2 00:26:12.852 crypto_ram3 00:26:12.852 crypto_ram4 ]] 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' 
"91e4ea2f-954c-5b1f-a6b5-fe916642822e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "91e4ea2f-954c-5b1f-a6b5-fe916642822e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "17ae5eec-f954-52e5-9138-317ecd15fa16"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "17ae5eec-f954-52e5-9138-317ecd15fa16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "c625f48e-b6f8-5211-bbef-cd960c2c9779"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "c625f48e-b6f8-5211-bbef-cd960c2c9779",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "2440ff81-e5a5-5987-aea7-4da707c48594"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2440ff81-e5a5-5987-aea7-4da707c48594",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram4]' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram4 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:12.852 10:33:35 
blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 ************************************ 00:26:12.852 START TEST bdev_fio_trim 00:26:12.852 ************************************ 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:12.852 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:12.853 10:33:35 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:12.853 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:12.853 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:12.853 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:12.853 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:12.853 fio-3.35 00:26:12.853 Starting 4 threads 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 
0000:3d:02.6 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:12.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:12.853 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:25.055 00:26:25.055 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1935205: Mon Jul 15 10:33:48 2024 00:26:25.055 write: IOPS=51.4k, BW=201MiB/s (210MB/s)(2007MiB/10001msec); 0 zone resets 00:26:25.055 slat (usec): min=10, max=466, avg=45.10, stdev=32.44 00:26:25.055 clat (usec): min=21, max=1865, avg=196.82, stdev=145.67 00:26:25.055 lat (usec): min=32, max=2130, avg=241.92, stdev=166.74 00:26:25.055 clat percentiles (usec): 00:26:25.055 | 50.000th=[ 155], 99.000th=[ 766], 99.900th=[ 963], 99.990th=[ 1139], 00:26:25.055 | 99.999th=[ 1696] 00:26:25.055 bw ( KiB/s): min=194048, max=270558, per=100.00%, avg=205937.58, stdev=5167.82, samples=76 00:26:25.055 iops : min=48512, max=67639, avg=51484.37, stdev=1291.93, samples=76 00:26:25.055 trim: IOPS=51.4k, BW=201MiB/s (210MB/s)(2007MiB/10001msec); 0 zone resets 00:26:25.055 slat (usec): min=4, max=935, avg=12.00, stdev= 6.06 00:26:25.055 clat (usec): min=33, max=1289, avg=186.04, stdev=95.83 00:26:25.055 lat (usec): min=37, max=1322, avg=198.04, stdev=98.42 00:26:25.055 clat percentiles (usec): 00:26:25.055 | 50.000th=[ 167], 99.000th=[ 537], 99.900th=[ 
660], 99.990th=[ 799], 00:26:25.055 | 99.999th=[ 1188] 00:26:25.055 bw ( KiB/s): min=194040, max=270590, per=100.00%, avg=205938.84, stdev=5169.05, samples=76 00:26:25.055 iops : min=48510, max=67647, avg=51484.68, stdev=1292.24, samples=76 00:26:25.055 lat (usec) : 50=2.32%, 100=16.28%, 250=61.42%, 500=16.65%, 750=2.76% 00:26:25.055 lat (usec) : 1000=0.55% 00:26:25.055 lat (msec) : 2=0.03% 00:26:25.055 cpu : usr=99.69%, sys=0.00%, ctx=53, majf=0, minf=89 00:26:25.055 IO depths : 1=8.3%, 2=26.2%, 4=52.4%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:25.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.055 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:25.055 issued rwts: total=0,513755,513755,0 short=0,0,0,0 dropped=0,0,0,0 00:26:25.055 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:25.055 00:26:25.055 Run status group 0 (all jobs): 00:26:25.055 WRITE: bw=201MiB/s (210MB/s), 201MiB/s-201MiB/s (210MB/s-210MB/s), io=2007MiB (2104MB), run=10001-10001msec 00:26:25.055 TRIM: bw=201MiB/s (210MB/s), 201MiB/s-201MiB/s (210MB/s-210MB/s), io=2007MiB (2104MB), run=10001-10001msec 00:26:25.055 00:26:25.055 real 0m13.306s 00:26:25.055 user 0m50.887s 00:26:25.055 sys 0m0.460s 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:26:25.055 ************************************ 00:26:25.055 END TEST bdev_fio_trim 00:26:25.055 ************************************ 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:26:25.055 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:26:25.055 00:26:25.055 real 0m26.994s 00:26:25.055 user 1m41.811s 00:26:25.055 sys 0m1.119s 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.055 10:33:48 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:25.055 ************************************ 00:26:25.055 END TEST bdev_fio 00:26:25.055 ************************************ 00:26:25.055 10:33:48 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:25.055 10:33:48 blockdev_crypto_aesni -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:25.055 10:33:48 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:25.055 10:33:48 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:26:25.055 10:33:48 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.055 10:33:48 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:25.055 ************************************ 00:26:25.055 START TEST bdev_verify 00:26:25.055 
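The trim pass above is driven by fio through SPDK's external bdev ioengine: the harness LD_PRELOADs build/fio/spdk_bdev and points fio at test/bdev/bdev.fio together with the generated bdev.json. As a rough illustration only, a minimal job file exercising the four crypto bdevs seen in the output might look like the sketch below. The real test/bdev/bdev.fio is generated by the SPDK test scripts; the thread/filename settings and the /tmp path are assumptions, while the plugin path, fio binary, bs=4k, iodepth=8, runtime=10 and rw=trimwrite values are taken from the command and job lines recorded above.

# Hypothetical stand-in for test/bdev/bdev.fio (not the file the CI run used)
cat > /tmp/crypto_trim.fio <<'EOF'
[global]
; spdk_bdev comes from the preloaded SPDK fio plugin; the plugin requires fio thread mode
ioengine=spdk_bdev
thread=1
rw=trimwrite
bs=4k
iodepth=8
runtime=10

; one job per crypto bdev defined in bdev.json
[job_crypto_ram]
filename=crypto_ram
[job_crypto_ram2]
filename=crypto_ram2
[job_crypto_ram3]
filename=crypto_ram3
[job_crypto_ram4]
filename=crypto_ram4
EOF

# Invocation mirroring the fio_plugin command captured in the log above
LD_PRELOAD=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio /tmp/crypto_trim.fio \
  --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json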
************************************ 00:26:25.055 10:33:49 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:25.055 [2024-07-15 10:33:49.076097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:25.055 [2024-07-15 10:33:49.076136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1937022 ] 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.055 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:25.055 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:01.3 cannot be 
used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:25.056 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:25.056 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:25.056 [2024-07-15 10:33:49.163996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:25.056 [2024-07-15 10:33:49.234173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.056 [2024-07-15 10:33:49.234176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.056 [2024-07-15 10:33:49.255127] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:25.056 [2024-07-15 10:33:49.263139] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:25.056 [2024-07-15 10:33:49.271163] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:25.056 [2024-07-15 10:33:49.368299] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:26.955 [2024-07-15 10:33:51.523733] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:26.955 [2024-07-15 10:33:51.523797] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:26.955 [2024-07-15 10:33:51.523808] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:26.955 [2024-07-15 10:33:51.531752] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:26.955 [2024-07-15 10:33:51.531769] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:26.955 [2024-07-15 10:33:51.531778] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:26.955 [2024-07-15 10:33:51.539771] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:26.955 [2024-07-15 
10:33:51.539785] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:26.955 [2024-07-15 10:33:51.539793] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:26.955 [2024-07-15 10:33:51.547793] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:26.955 [2024-07-15 10:33:51.547806] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:26.955 [2024-07-15 10:33:51.547813] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:26.955 Running I/O for 5 seconds... 00:26:32.249 00:26:32.249 Latency(us) 00:26:32.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.249 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:32.249 Verification LBA range: start 0x0 length 0x1000 00:26:32.249 crypto_ram : 5.04 742.26 2.90 0.00 0.00 171811.57 1638.40 114923.93 00:26:32.249 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:32.249 Verification LBA range: start 0x1000 length 0x1000 00:26:32.249 crypto_ram : 5.04 750.09 2.93 0.00 0.00 170070.11 2123.37 114923.93 00:26:32.249 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:32.249 Verification LBA range: start 0x0 length 0x1000 00:26:32.249 crypto_ram2 : 5.04 746.82 2.92 0.00 0.00 170583.16 1500.77 106954.75 00:26:32.249 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:32.249 Verification LBA range: start 0x1000 length 0x1000 00:26:32.249 crypto_ram2 : 5.04 753.10 2.94 0.00 0.00 169153.33 2215.12 106535.32 00:26:32.249 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:32.249 Verification LBA range: start 0x0 length 0x1000 00:26:32.249 crypto_ram3 : 5.04 5870.47 22.93 0.00 0.00 21668.91 1887.44 18245.22 00:26:32.249 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:32.249 Verification LBA range: start 0x1000 length 0x1000 00:26:32.249 crypto_ram3 : 5.03 5899.40 23.04 0.00 0.00 21559.35 3014.66 18245.22 00:26:32.249 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:32.249 Verification LBA range: start 0x0 length 0x1000 00:26:32.249 crypto_ram4 : 5.04 5871.05 22.93 0.00 0.00 21632.77 2044.72 16357.79 00:26:32.249 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:32.249 Verification LBA range: start 0x1000 length 0x1000 00:26:32.249 crypto_ram4 : 5.03 5898.55 23.04 0.00 0.00 21528.40 2791.83 15938.36 00:26:32.249 =================================================================================================================== 00:26:32.249 Total : 26531.75 103.64 0.00 0.00 38404.89 1500.77 114923.93 00:26:32.249 00:26:32.249 real 0m7.970s 00:26:32.249 user 0m15.271s 00:26:32.249 sys 0m0.297s 00:26:32.249 10:33:56 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:32.249 10:33:56 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:26:32.249 ************************************ 00:26:32.249 END TEST bdev_verify 00:26:32.249 ************************************ 00:26:32.249 10:33:57 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:32.249 10:33:57 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:32.508 10:33:57 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:26:32.508 10:33:57 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.508 10:33:57 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:32.508 ************************************ 00:26:32.508 START TEST bdev_verify_big_io 00:26:32.508 ************************************ 00:26:32.508 10:33:57 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:32.508 [2024-07-15 10:33:57.126133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:32.508 [2024-07-15 10:33:57.126175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938353 ] 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested 
device 0000:3d:02.7 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:32.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:32.508 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:32.508 [2024-07-15 10:33:57.215627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:32.508 [2024-07-15 10:33:57.285802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.508 [2024-07-15 10:33:57.285805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.766 [2024-07-15 10:33:57.306860] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:32.766 [2024-07-15 10:33:57.314886] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:32.766 [2024-07-15 10:33:57.322912] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:32.766 [2024-07-15 10:33:57.415938] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:35.299 [2024-07-15 10:33:59.562742] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:35.299 [2024-07-15 10:33:59.562803] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:35.299 [2024-07-15 10:33:59.562813] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation 
deferred pending base bdev arrival 00:26:35.299 [2024-07-15 10:33:59.570757] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:35.299 [2024-07-15 10:33:59.570771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:35.299 [2024-07-15 10:33:59.570779] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:35.299 [2024-07-15 10:33:59.578780] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:35.299 [2024-07-15 10:33:59.578792] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:35.299 [2024-07-15 10:33:59.578800] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:35.299 [2024-07-15 10:33:59.586801] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:35.299 [2024-07-15 10:33:59.586813] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:35.299 [2024-07-15 10:33:59.586821] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:35.299 Running I/O for 5 seconds... 00:26:36.676 [2024-07-15 10:34:01.259451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.260343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.261256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.262183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.263556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.264352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.265281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.266205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.267434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.268342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.269337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.270294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.272015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.272952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.676 [2024-07-15 10:34:01.273871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
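The "Failed to get src_mbufs!" errors above begin as soon as bdevperf starts issuing I/O. The configuration being exercised is described by the *NOTICE* lines just before them: bdev.json registers four AES-CBC keys ("test_dek_aesni_cbc_1".."4") with the accel framework and stacks one crypto vbdev on each Malloc base, with each vbdev held back until its base bdev appears ("vbdev creation deferred pending base bdev arrival"). A hedged, minimal sketch of that kind of configuration, assuming present-day accel_crypto_key_create / bdev_crypto_create JSON parameters and using placeholder key material and sizes rather than the test's real values:

# Hypothetical, minimal stand-in for the crypto portion of bdev.json; one
# base/vbdev pair shown, while the CI config carries four (crypto_ram..crypto_ram4)
cat > /tmp/crypto_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "accel",
      "config": [
        {
          "method": "accel_crypto_key_create",
          "params": {
            "name": "test_dek_aesni_cbc_1",
            "cipher": "AES_CBC",
            "key": "00112233445566778899aabbccddeeff"
          }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 }
        },
        {
          "method": "bdev_crypto_create",
          "params": {
            "base_bdev_name": "Malloc0",
            "name": "crypto_ram",
            "key_name": "test_dek_aesni_cbc_1"
          }
        }
      ]
    }
  ]
}
EOF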
00:26:36.676 [2024-07-15 10:34:01.274476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:36.678 (the same accel_dpdk_cryptodev_task_alloc_resources *ERROR*: Failed to get src_mbufs! line repeats back-to-back, timestamps running from [2024-07-15 10:34:01.274476] through [2024-07-15 10:34:01.385974])
00:26:36.678 [2024-07-15 10:34:01.386011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.386260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.386457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.387960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.388576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.388625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.388653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.388685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.388855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.388971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.389003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.389030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.389056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.389264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.678 [2024-07-15 10:34:01.390081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:36.679 [2024-07-15 10:34:01.390134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.390172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.390207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.390378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.390489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.390522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.390551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.390578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.390782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.391411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.391449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.391476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.391503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.391819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.391935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.391990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.392020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.392047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.679 [2024-07-15 10:34:01.392378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.470054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.470103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.470623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.470655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.472611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:36.940 [2024-07-15 10:34:01.472660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.473638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.473672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.474605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.474648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.475450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.475483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.476500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.476542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.476943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.476976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.478072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.478115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.479052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.479085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.480663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.480706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.481615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.481647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.482202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.482241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.482859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.482894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.484375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:36.940 [2024-07-15 10:34:01.484416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.485362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.486243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.487522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.487574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.487839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.488112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.489874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.489928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.490299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.491432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.491483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.492520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.940 [2024-07-15 10:34:01.493468] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.494131] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.494910] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.495732] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.496022] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.496439] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.497336] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.498276] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.499333] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.499951] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.500713] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:26:36.940 [2024-07-15 10:34:01.501502] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.502413] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.503414] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.504324] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.505218] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.506485] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.507286] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.508091] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.509020] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.510020] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.510814] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.511621] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.512561] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.514409] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.515275] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.516210] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.516996] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.518033] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.518828] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.519772] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.520134] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.521997] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.522984] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.523022] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:26:36.940 [2024-07-15 10:34:01.523767] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.940 [2024-07-15 10:34:01.524783] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.525597] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.525635] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.526575] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.527512] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.528299] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.528337] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.529136] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.529407] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.529837] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.529879] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.530650] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.531389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.531656] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.531693] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.531954] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.532222] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.533077] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.533118] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.534067] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.534891] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.535697] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.535735] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:26:36.941 [2024-07-15 10:34:01.536671] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.537005] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.537269] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.537305] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.537723] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.538478] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.539362] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.539400] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.540246] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.540552] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.541351] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.541391] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.542394] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.543265] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.544067] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.544105] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.544892] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.545175] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.545532] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.545569] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.546340] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.547073] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.547337] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.547374] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:26:36.941 [2024-07-15 10:34:01.547627] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.547896] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.548790] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.548827] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.549669] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.550492] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.551453] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.551501] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.551755] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.552116] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.552749] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.552788] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.553493] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.554364] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.554882] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.554926] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.555179] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.555495] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.556325] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.556370] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.557181] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.557969] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.558234] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.558269] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:26:36.941 [2024-07-15 10:34:01.558525] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.558830] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.559579] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.559621] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.559879] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.560696] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.560970] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.561258] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.561297] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.561575] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.562346] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.562907] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.562946] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.564007] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.564956] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.565002] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.565805] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.567074] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.567916] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.567956] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.568217] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.569914] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.569985] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.570506] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:26:36.941 [2024-07-15 10:34:01.571088] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.941 [2024-07-15 10:34:01.571583] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.571628] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.571882] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.572546] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.573407] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.574122] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.574918] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.574956] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.575334] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.575609] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.575867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.575913] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.577090] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.577359] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.577399] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.577654] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.578266] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.578537] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.578578] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.578831] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.580033] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.580081] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.580334] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:26:36.942 [2024-07-15 10:34:01.580592] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.581147] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.581195] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.581449] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.581706] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.582808] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.583085] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.583349] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.583395] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.583782] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.584051] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.584318] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.584362] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.585627] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.585919] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.585960] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.586225] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.586799] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.587075] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.587121] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.587382] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.588645] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.588693] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.588961] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:26:36.942 [2024-07-15 10:34:01.589216] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.589821] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.589867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.590134] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.590389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.591360] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.591636] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.591895] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.591940] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.592291] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.592561] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.592816] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.592861] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.594098] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.594366] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.594406] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.594664] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.595341] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.595613] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.595656] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.595926] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.597228] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:26:36.942 [2024-07-15 10:34:01.597552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.597803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:36.942 [2024-07-15 10:34:01.597835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.598426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.598449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.598697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.598953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.598986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.599263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.600187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.600449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.600485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.600736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.601313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.601574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.601611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.601862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.602133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.602714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.602983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.603025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.603275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.603617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.603876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.603918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:36.942 [2024-07-15 10:34:01.604169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:36.942 [2024-07-15 10:34:01.604439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:36.942 [... the same *ERROR* line repeats several hundred times with only the timestamps changing, from 10:34:01.604439 through 10:34:01.861750 ...]
00:26:37.208 [2024-07-15 10:34:01.861750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:37.208 [2024-07-15 10:34:01.862022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.862281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.862315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.862588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.862848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.863822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.863853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.864130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.866704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.866976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.867011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.867265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.868413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.868683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.868725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.869651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.869943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.872391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.872445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.872697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.872959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.874085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.874127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.874381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.208 [2024-07-15 10:34:01.875330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.875621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.877269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.208 [2024-07-15 10:34:01.878275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.878538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.878580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.878941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.879205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.880199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.880232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.880499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.883224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.883491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.883528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.884367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.884948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.885211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.885249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.886137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.886406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.889179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.889449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.890409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.890661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.209 [2024-07-15 10:34:01.891278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.892181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.892439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.893438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.893753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.895292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.896252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.896515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.896564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.896945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.897211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.898232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.898266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.898548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.901243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.901287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.901743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.901775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.902479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.902520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.903034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.903067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.903296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.905801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.209 [2024-07-15 10:34:01.905843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.906816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.906855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.908089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.908132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.908382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.908417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.908594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.911662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.911705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.912171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.912204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.912737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.912777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.913551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.913587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.913820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.916532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.916573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.917051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.917084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.917674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.917715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.918437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.209 [2024-07-15 10:34:01.918474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.918656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.920610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.920655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.920943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.920976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.921246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.921280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.921533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.921570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.921744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.923493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.923535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.923565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.923591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.924876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.924930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.924958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.924997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.925172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.926860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.926898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.926930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.926956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.209 [2024-07-15 10:34:01.927324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.927369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.927400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.927426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.927600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.209 [2024-07-15 10:34:01.929396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.929442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.929469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.929498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.929769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.929801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.929836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.929864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.930045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.932437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.932476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.932503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.932529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.932912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.932948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.932990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.933020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.933194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.935412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.210 [2024-07-15 10:34:01.935452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.936385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.936422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.936841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.936876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.937543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.937573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.937764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.940454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.940509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.941198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.941230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.942410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.942458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.942710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.942743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.942923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.946172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.946215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.947044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.947076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.948013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.948065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.948977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.210 [2024-07-15 10:34:01.949007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.949257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.952519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.952571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.953330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.953361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.954373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.954414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.955216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.955247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.955423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.957394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.957435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.958221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.958253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.959450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.959491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.959854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.959888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.960067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.963178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.963220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.964062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.964094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.210 [2024-07-15 10:34:01.965381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.965428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.966413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.966452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.966627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.969912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.969953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.970643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.970684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.971190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.971230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.971974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.972004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.210 [2024-07-15 10:34:01.972181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.975110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.975152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.975918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.975949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.977149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.977191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.977699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.977730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.977965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.980437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.211 [2024-07-15 10:34:01.980479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.981400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.981432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.982725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.982767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.983680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.983714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.983890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.986311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.986351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.987235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.987268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.988383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.988437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.989379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.989417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.211 [2024-07-15 10:34:01.989612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.992923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.992971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.993008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.993850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.995041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.995084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.995121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.474 [2024-07-15 10:34:01.996136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.996316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.998843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.998884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.999799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:01.999831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.000159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.000198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.000985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.001015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.001225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.003314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.004261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.004295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.004322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.004693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.005580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.005614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.005640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.005816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.008699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.008741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.008779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.009039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.474 [2024-07-15 10:34:02.010171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.010217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.010244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.011182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.011360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.013228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.013266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.013847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.013881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.014155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.014191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.014723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.014754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.014989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.016917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.017522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.017552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.017586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.017852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.018783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.018821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.018848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.019026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.021088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.474 [2024-07-15 10:34:02.021129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.021163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.022085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.023198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.023243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.023276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.024024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.024194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.026386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.026429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.027268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.027298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.027618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.027652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.028070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.028101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.028277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.030611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.031529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.031565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.031591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.031865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.032781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.474 [2024-07-15 10:34:02.032816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.474 [2024-07-15 10:34:02.032842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.743 [... the identical "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 (accel_dpdk_cryptodev_task_alloc_resources) repeats several hundred times, timestamps 2024-07-15 10:34:02.032842 through 10:34:02.302181 ...] 
00:26:37.743 [2024-07-15 10:34:02.302429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.302609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.303245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.303290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.304191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.304223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.304493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.304533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.305498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.305536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.305713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.306409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.306670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.306701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.306729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.307010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.308010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.308050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.308084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.308259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.309611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.309652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.309678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.310767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.743 [2024-07-15 10:34:02.311305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.311346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.311374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.311796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.311993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.314276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.314315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.315098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.315130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.315431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.315466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.316317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.316352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.316619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.318232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.319038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.319074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.319101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.319415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.320195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.320229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.320256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.320465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.321273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.743 [2024-07-15 10:34:02.321316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.321344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.321642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.322842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.322885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.322918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.323692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.323870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.324510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.324549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.325335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.325368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.325698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.325739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.326002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.326033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.326369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.327006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.327786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.327819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.327846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.328139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.328738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.328771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.743 [2024-07-15 10:34:02.328798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.329010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.330302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.330344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.330372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.330620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.331676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.331716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.331744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.332499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.332698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.333282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.333328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.334280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.334320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.334591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.334632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.335540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.335573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.335748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.743 [2024-07-15 10:34:02.336434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.336473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.336505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.336532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.744 [2024-07-15 10:34:02.336822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.336856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.336886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.336918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.337122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.338431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.338472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.338502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.339343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.340536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.340583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.340615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.341457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.341779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.342501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.343190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.343227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.343520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.343794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.344727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.344770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.345029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.345268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.345857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.744 [2024-07-15 10:34:02.346601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.346638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.347508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.347851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.348287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.348323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.349097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.349341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.350024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.350717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.350754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.351599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.351922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.352495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.352530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.353129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.353471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.354167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.354807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.354842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.355198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.355467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.356312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.356348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.744 [2024-07-15 10:34:02.356601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.356834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.358383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.359309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.359364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.360126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.361253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.361957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.361992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.362240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.362505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.363221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.363674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.364689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.365474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.365786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.366051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.366472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.367153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.367411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.369938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.370782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.371232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.371888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.744 [2024-07-15 10:34:02.372427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.372688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.373066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.373834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.374016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.377263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.377524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.377786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.378128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.379152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.379414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.380358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.381205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.381492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.382562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.383176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.383672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.383928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.384488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.384752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.385017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.744 [2024-07-15 10:34:02.385269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.385452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.386426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.745 [2024-07-15 10:34:02.386691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.386728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.386994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.387561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.387820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.387861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.388122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.388411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.389354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.389619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.389655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.389919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.390360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.390619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.390655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.390920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.391229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.392324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.392594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.392641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.392896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.393343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.393602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.393647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.745 [2024-07-15 10:34:02.393910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.394171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.395399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.395660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.395695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.395958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.396393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.396652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.396687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.396951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.397256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.398198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.398465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.398500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.398757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.399193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.399451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.399492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.399760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.400037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.400831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.401109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.401162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.401425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.745 [2024-07-15 10:34:02.401877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.402152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.402196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.402453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.402734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.403526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.403792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.403830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.404098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.404531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.404794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.404839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.405106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.405355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.406139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.406408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.406444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.406704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.407132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.407397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.407441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.407699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.407961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.408740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.745 [2024-07-15 10:34:02.409015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.409054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.409311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.409747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.410020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.410064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.410322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.410614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.411513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.412225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.412262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.412568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.412882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.413165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.413203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.413459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.413642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.415364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.416227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.416490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.416523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.416993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.418007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.418823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.745 [2024-07-15 10:34:02.418858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.745 [2024-07-15 10:34:02.419107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.420004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.420275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.420308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.420946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.422099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.422842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.422877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.423460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.423661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.425396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.425446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.426299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.426614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.427760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.427808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.428083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.428340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.428564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.430252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.431089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.431348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:37.746 [2024-07-15 10:34:02.431377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:37.746 [2024-07-15 10:34:02.431816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:37.746 (same *ERROR* line repeated continuously; identical intervening entries elided)
00:26:38.012 [2024-07-15 10:34:02.634408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:38.012 [2024-07-15 10:34:02.634685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.635311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.635348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.636091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.636334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.637633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.638650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.638692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.639689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.640059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.640879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.640925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.641787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.642051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.642637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.643480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.643517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.643915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.644195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.644460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.644500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.644773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.644971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.645631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.012 [2024-07-15 10:34:02.646683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.646723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.647572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.647926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.648186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.648219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.648757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.648971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.649576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.650303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.650338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.651137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.651457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.651892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.651935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.652186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.652428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.653054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.653396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.654159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.654200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.654469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.654728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.654992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.012 [2024-07-15 10:34:02.655027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.655269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.656058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.656994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.657029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.657697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.658305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.659200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.659243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.659496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.659673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.660661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.660705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.661305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.661798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.662707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.662766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.663027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.663280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.663460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.664078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.664340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.664596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.664628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.012 [2024-07-15 10:34:02.664899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.665853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.666166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.666200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.666393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.667208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.668023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.668058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.012 [2024-07-15 10:34:02.668906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.670004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.670496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.670530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.671296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.671481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.672295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.672339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.672608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.673513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.674591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.674632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.675413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.675912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.676111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.676697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.013 [2024-07-15 10:34:02.677689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.677953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.677988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.678404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.679411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.680369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.680407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.680584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.681996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.682785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.682817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.683605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.684107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.684366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.684401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.685259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.685437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.686524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.686565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.687343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.688138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.689195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.689235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.689483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.013 [2024-07-15 10:34:02.689782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.689965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.690566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.691501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.692108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.692140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.692443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.693243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.694035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.694067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.694273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.695755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.696567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.696602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.697383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.698413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.699232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.699267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.700089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.700297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.701341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.702138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.702960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.703824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.013 [2024-07-15 10:34:02.704751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.705552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.706362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.707156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.707368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.708491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.709341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.710216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.710249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.710519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.711345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.712257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.712296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.712472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.713621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.013 [2024-07-15 10:34:02.713667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.713928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.713962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.714986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.715027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.715804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.715834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.716048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.717449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.014 [2024-07-15 10:34:02.717496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.718469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.718507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.719016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.719060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.719314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.719347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.719523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.720884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.720934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.721488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.721519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.722728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.722772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.723627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.723659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.723835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.725147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.725191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.726229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.726268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.727081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.727123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.727912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.014 [2024-07-15 10:34:02.727944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.728150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.728713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.728749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.729022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.729063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.729458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.729492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.730275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.730306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.730484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.731562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.731604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.731632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.731659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.732738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.732778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.732806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.732842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.733027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.733701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.733737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.733765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.733792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.014 [2024-07-15 10:34:02.734071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.734111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.734139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.734171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.734349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.734964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.735006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.735034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.735062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.735331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.735370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.735398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.735430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.735606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.736222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.736267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.736296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.736323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.736743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.736788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.736816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.736843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.737071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.737630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.014 [2024-07-15 10:34:02.737671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.738518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.738551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.738849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.738916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.739785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.739820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.740024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.740846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.740893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.741165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.741204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.742364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.742413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.743427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.743465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.743657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.745110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.745155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.014 [2024-07-15 10:34:02.745951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.745984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.746552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.746596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.747222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.015 [2024-07-15 10:34:02.747255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.747468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.748885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.748933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.749978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.750016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.751100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.751143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.752084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.752121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.752358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.753676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.753718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.754523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.754555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.755499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.755543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.756389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.756421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.756597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.757430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.757474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.757731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.757760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.015 [2024-07-15 10:34:02.758768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.758809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.759621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.759654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.759862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.761413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.761474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.762351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.762383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.762872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.762923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.763172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.763205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.763381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.764731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.764773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.765302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.765336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.766423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.766480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.767293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.767325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.767568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.015 [2024-07-15 10:34:02.769596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.015 [2024-07-15 10:34:02.769647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:38.015 ... (same *ERROR* line repeated continuously for every allocation attempt between 10:34:02.769647 and 10:34:02.972794; duplicate entries condensed) ...
00:26:38.285 [2024-07-15 10:34:02.972794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:38.285 [2024-07-15 10:34:02.972837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.973647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.973677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.974071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.974121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.974375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.974407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.974585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.976153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.976202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.976856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.976886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.978013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.978056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.978970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.979008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.979250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.980928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.980979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.981924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.981957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.982979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.983021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.983614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.285 [2024-07-15 10:34:02.983654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.983979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.985617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.985660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.985966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.986015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.987174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.987225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.987479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.285 [2024-07-15 10:34:02.987513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.987764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.989194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.989236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.990010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.990053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.990722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.990767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.991036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.991075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.991302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.992455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.992502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.993281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.993319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.286 [2024-07-15 10:34:02.993898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.993951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.994221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.994258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.994444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.995919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.995964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.996507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.996562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.997242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.997289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.998163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.998204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:02.998387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.000089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.000139] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.000408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.000443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.001610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.001656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.002282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.002316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.002560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.003593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.286 [2024-07-15 10:34:03.003636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.003887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.003927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.005148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.005201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.005462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.005498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.005680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.006926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.006985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.007769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.007799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.008588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.008631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.009558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.009591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.009768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.010748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.010792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.010825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.286 [2024-07-15 10:34:03.011165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.012201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.012241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.012271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.287 [2024-07-15 10:34:03.012845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.013031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.013956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.014003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.014430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.014463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.014787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.014822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.015601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.015633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.015811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.016482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.017450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.017492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.017522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.017794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.018528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.018561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.018602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.018954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.020528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.020569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.020608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.021307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.287 [2024-07-15 10:34:03.021914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.021979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.022008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.022375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.022557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.023205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.023242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.023923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.023955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.024223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.024262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.025267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.025303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.025480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.026186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.026454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.026490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.026522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.026831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.027579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.027616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.027643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.027918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.028958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.287 [2024-07-15 10:34:03.029002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.029029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.029281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.029808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.029853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.029893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.030180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.030435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.031250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.031297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.031555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.031588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.031911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.031948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.032201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.032234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.032451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.033190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.033451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.033492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.033520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.033851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.034648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.034682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.287 [2024-07-15 10:34:03.034713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.035071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.036176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.036235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.036263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.036538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.037051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.037105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.037133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.037847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.038095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.038917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.038954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.039691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.039721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.040053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.040088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.040454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.287 [2024-07-15 10:34:03.040485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.040748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.041516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.041552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.041582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.041615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.288 [2024-07-15 10:34:03.042022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.042057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.042086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.042113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.042323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.043344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.043391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.043418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.043965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.044669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.044712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.044741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.045009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.045229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.045925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.046863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.046909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.047182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.047541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.047810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.047844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.048828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.049160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.050035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.288 [2024-07-15 10:34:03.050309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.050352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.050649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.050957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.051458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.051490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.052052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.052245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.053080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.053741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.053772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.054455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.054780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.055075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.055112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.055371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.055594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.056240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.056502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.056532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.056783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.057136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.058045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.058086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.288 [2024-07-15 10:34:03.058348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.058525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.059762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.060779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.060835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.061174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.061679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.061958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.062004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.062266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.062448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.063363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.288 [2024-07-15 10:34:03.063632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.550 [2024-07-15 10:34:03.063898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.550 [2024-07-15 10:34:03.064169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.550 [2024-07-15 10:34:03.064449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.550 [2024-07-15 10:34:03.064923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.550 [2024-07-15 10:34:03.065591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.550 [2024-07-15 10:34:03.065852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.066044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.067285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.067995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.068620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.069153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.551 [2024-07-15 10:34:03.070031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.070912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.071594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.072214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.072397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.074095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.074771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.075206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.076188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.076869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.077138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.077416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.078148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.078327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.080006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.080278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.081123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.081717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.082818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.083457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.083927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.084184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.084497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.085618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.551 [2024-07-15 10:34:03.086573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.086614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.087589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.088218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.088675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.088709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.089428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.089609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.090328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.090825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.090857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.091113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.091446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.092097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.092133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.093064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.093245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.093920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.094837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.094870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.095190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.095519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.095874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.551 [2024-07-15 10:34:03.095916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:26:38.551 [2024-07-15 10:34:03.096584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:38.551 [... identical "accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!" entries repeated continuously from 10:34:03.096584 through 10:34:03.308637 (console timestamps 00:26:38.551-00:26:38.557) ...]
00:26:38.557 [2024-07-15 10:34:03.308637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:26:38.557 [2024-07-15 10:34:03.309058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.309239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.310079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.310339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.310371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.310737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.311232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.312090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.312123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.312372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.312719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.313338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.313600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.314621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.315578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.315981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.316238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.316491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.316747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.316964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.318409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.319214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.320029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.320510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
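These allocation failures come from the dpdk_cryptodev accel module being unable to pull source mbufs for its in-flight crypto tasks; with the verify job below running at queue depth 128 and 64 KiB I/O, brief pool exhaustion is plausible, and the run still completing suggests the module retries rather than failing the I/O (that interpretation is an assumption, not something this log states). A hedged way to quantify the burst from a saved copy of this console output; "console.log" is an illustrative local filename, not a path from this job:

# Count the allocation retries logged during the verify pass.
grep -c 'Failed to get src_mbufs!' console.log
# See how tightly the burst clusters in time (millisecond buckets).
grep -o '10:34:03\.[0-9]\{3\}' console.log | sort | uniq -c | sort -rn | head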
00:26:38.557 [2024-07-15 10:34:03.321588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.322388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.322642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.322892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:38.557 [2024-07-15 10:34:03.323113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:26:40.467 00:26:40.467 Latency(us) 00:26:40.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.467 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:40.467 Verification LBA range: start 0x0 length 0x100 00:26:40.467 crypto_ram : 5.51 68.91 4.31 0.00 0.00 1818489.34 39845.89 1624034.51 00:26:40.467 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:40.467 Verification LBA range: start 0x100 length 0x100 00:26:40.467 crypto_ram : 5.49 69.98 4.37 0.00 0.00 1789761.84 5898.24 1543503.87 00:26:40.467 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:40.467 Verification LBA range: start 0x0 length 0x100 00:26:40.467 crypto_ram2 : 5.52 69.42 4.34 0.00 0.00 1767074.20 5426.38 1624034.51 00:26:40.467 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:40.467 Verification LBA range: start 0x100 length 0x100 00:26:40.467 crypto_ram2 : 5.49 69.98 4.37 0.00 0.00 1750161.72 5111.81 1523371.21 00:26:40.467 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:40.467 Verification LBA range: start 0x0 length 0x100 00:26:40.467 crypto_ram3 : 5.36 471.63 29.48 0.00 0.00 253044.06 33764.15 352321.54 00:26:40.467 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:40.467 Verification LBA range: start 0x100 length 0x100 00:26:40.467 crypto_ram3 : 5.36 491.83 30.74 0.00 0.00 243001.42 23697.82 395942.30 00:26:40.467 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:40.467 Verification LBA range: start 0x0 length 0x100 00:26:40.467 crypto_ram4 : 5.41 485.25 30.33 0.00 0.00 241694.77 11377.05 308700.77 00:26:40.467 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:40.467 Verification LBA range: start 0x100 length 0x100 00:26:40.467 crypto_ram4 : 5.40 505.79 31.61 0.00 0.00 232214.13 3303.01 397620.02 00:26:40.467 =================================================================================================================== 00:26:40.467 Total : 2232.79 139.55 0.00 0.00 437775.99 3303.01 1624034.51 00:26:41.034 00:26:41.034 real 0m8.436s 00:26:41.034 user 0m16.169s 00:26:41.034 sys 0m0.336s 00:26:41.034 10:34:05 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:41.034 10:34:05 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:26:41.034 ************************************ 00:26:41.034 END TEST bdev_verify_big_io 00:26:41.034 ************************************ 00:26:41.034 10:34:05 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:41.034 10:34:05 blockdev_crypto_aesni -- 
bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:41.034 10:34:05 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:41.034 10:34:05 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.034 10:34:05 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:41.034 ************************************ 00:26:41.034 START TEST bdev_write_zeroes 00:26:41.034 ************************************ 00:26:41.034 10:34:05 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:41.034 [2024-07-15 10:34:05.644584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:41.034 [2024-07-15 10:34:05.644624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1939692 ] 00:26:41.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.034 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:41.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.034 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:41.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.034 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:41.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.034 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:41.034 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:26:41.035 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:41.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:41.035 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:41.035 [2024-07-15 10:34:05.733728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.035 [2024-07-15 10:34:05.804633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.293 [2024-07-15 10:34:05.825619] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:26:41.293 [2024-07-15 10:34:05.833546] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:26:41.293 [2024-07-15 10:34:05.841564] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:26:41.293 [2024-07-15 10:34:05.942448] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:26:43.824 [2024-07-15 10:34:08.092307] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:26:43.824 [2024-07-15 10:34:08.092366] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:43.824 [2024-07-15 10:34:08.092377] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:43.824 [2024-07-15 
10:34:08.100325] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:26:43.824 [2024-07-15 10:34:08.100337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:43.824 [2024-07-15 10:34:08.100344] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:43.824 [2024-07-15 10:34:08.108345] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:26:43.824 [2024-07-15 10:34:08.108356] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:26:43.824 [2024-07-15 10:34:08.108363] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:43.824 [2024-07-15 10:34:08.116366] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:26:43.824 [2024-07-15 10:34:08.116377] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:26:43.824 [2024-07-15 10:34:08.116384] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:43.824 Running I/O for 1 seconds... 00:26:44.759 00:26:44.759 Latency(us) 00:26:44.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.759 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:44.759 crypto_ram : 1.02 3148.22 12.30 0.00 0.00 40458.76 3329.23 47185.92 00:26:44.759 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:44.759 crypto_ram2 : 1.02 3161.58 12.35 0.00 0.00 40177.80 3276.80 44040.19 00:26:44.759 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:44.759 crypto_ram3 : 1.01 24574.30 95.99 0.00 0.00 5158.89 1507.33 6815.74 00:26:44.759 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:44.759 crypto_ram4 : 1.01 24611.13 96.14 0.00 0.00 5142.14 1500.77 5609.88 00:26:44.759 =================================================================================================================== 00:26:44.759 Total : 55495.22 216.78 0.00 0.00 9160.10 1500.77 47185.92 00:26:44.759 00:26:44.759 real 0m3.910s 00:26:44.759 user 0m3.576s 00:26:44.759 sys 0m0.287s 00:26:44.759 10:34:09 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:44.759 10:34:09 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:26:44.759 ************************************ 00:26:44.759 END TEST bdev_write_zeroes 00:26:44.759 ************************************ 00:26:45.018 10:34:09 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 0 00:26:45.018 10:34:09 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:45.018 10:34:09 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:45.018 10:34:09 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.018 10:34:09 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:45.018 ************************************ 00:26:45.018 START TEST bdev_json_nonenclosed 00:26:45.018 ************************************ 00:26:45.018 
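For reference, the write_zeroes results above come from the bdevperf invocation whose flags appear verbatim in the command line earlier in this log; a hedged reproduction sketch, assuming the same workspace layout as this job (adjust SPDK_DIR for a local checkout):

# Sketch only: rerun the write_zeroes workload against the same bdev.json.
SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk
"$SPDK_DIR"/build/examples/bdevperf \
    --json "$SPDK_DIR"/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1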
10:34:09 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:45.018 [2024-07-15 10:34:09.649325] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:45.018 [2024-07-15 10:34:09.649366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940481 ] 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:45.018 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:45.018 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.018 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:45.018 [2024-07-15 10:34:09.737156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.018 [2024-07-15 10:34:09.806180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.018 [2024-07-15 10:34:09.806246] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
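The parser error above is the expected outcome of this negative test: json_config_prepare_ctx accepts only a single JSON object carrying a "subsystems" array (the nonarray test further below trips the complementary check, where "subsystems" is present but is not an array). A minimal sketch of a shape the parser accepts, next to shapes that should reproduce the two errors; the file names are illustrative and the real nonenclosed.json/nonarray.json contents are not shown in this log:

# Valid shape: one top-level object with a "subsystems" array.
cat > /tmp/valid_minimal.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# Shapes that should trigger the two errors exercised by these tests.
echo '"not enclosed in an object"' > /tmp/nonenclosed_like.json
echo '{ "subsystems": {} }' > /tmp/nonarray_like.json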
00:26:45.018 [2024-07-15 10:34:09.806261] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:45.018 [2024-07-15 10:34:09.806285] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:45.276 00:26:45.276 real 0m0.283s 00:26:45.276 user 0m0.163s 00:26:45.276 sys 0m0.119s 00:26:45.276 10:34:09 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:26:45.276 10:34:09 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.276 10:34:09 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:26:45.276 ************************************ 00:26:45.276 END TEST bdev_json_nonenclosed 00:26:45.276 ************************************ 00:26:45.276 10:34:09 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:26:45.276 10:34:09 blockdev_crypto_aesni -- bdev/blockdev.sh@782 -- # true 00:26:45.276 10:34:09 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:45.276 10:34:09 blockdev_crypto_aesni -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:26:45.276 10:34:09 blockdev_crypto_aesni -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.276 10:34:09 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:45.276 ************************************ 00:26:45.276 START TEST bdev_json_nonarray 00:26:45.276 ************************************ 00:26:45.276 10:34:09 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:45.276 [2024-07-15 10:34:10.016505] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:26:45.276 [2024-07-15 10:34:10.016549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940518 ] 00:26:45.276 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.276 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:45.276 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.276 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:45.276 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.276 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:45.276 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.276 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:45.276 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.276 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:45.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.543 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:45.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.543 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:45.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.543 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:45.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.543 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:45.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.543 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:45.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.543 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:45.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.543 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:45.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:45.544 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:45.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:45.544 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:45.544 [2024-07-15 10:34:10.107925] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.544 [2024-07-15 10:34:10.178672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.544 [2024-07-15 10:34:10.178740] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:26:45.544 [2024-07-15 10:34:10.178755] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:26:45.544 [2024-07-15 10:34:10.178763] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:45.544 00:26:45.544 real 0m0.290s 00:26:45.544 user 0m0.168s 00:26:45.544 sys 0m0.120s 00:26:45.544 10:34:10 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:26:45.544 10:34:10 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.544 10:34:10 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:26:45.544 ************************************ 00:26:45.544 END TEST bdev_json_nonarray 00:26:45.544 ************************************ 00:26:45.544 10:34:10 blockdev_crypto_aesni -- common/autotest_common.sh@1142 -- # return 234 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@785 -- # true 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@787 -- # [[ crypto_aesni == bdev ]] 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@794 -- # [[ crypto_aesni == gpt ]] 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@798 -- # [[ crypto_aesni == crypto_sw ]] 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@811 -- # cleanup 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ 
crypto_aesni == daos ]] 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:26:45.544 10:34:10 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:26:45.544 00:26:45.544 real 1m7.092s 00:26:45.544 user 2m43.623s 00:26:45.544 sys 0m7.314s 00:26:45.544 10:34:10 blockdev_crypto_aesni -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.544 10:34:10 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:26:45.544 ************************************ 00:26:45.544 END TEST blockdev_crypto_aesni 00:26:45.544 ************************************ 00:26:45.802 10:34:10 -- common/autotest_common.sh@1142 -- # return 0 00:26:45.802 10:34:10 -- spdk/autotest.sh@358 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:26:45.802 10:34:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:45.802 10:34:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.802 10:34:10 -- common/autotest_common.sh@10 -- # set +x 00:26:45.802 ************************************ 00:26:45.802 START TEST blockdev_crypto_sw 00:26:45.802 ************************************ 00:26:45.802 10:34:10 blockdev_crypto_sw -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:26:45.802 * Looking for test storage... 00:26:45.802 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:26:45.802 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # uname -s 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # test_type=crypto_sw 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # crypto_device= 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # dek= 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # env_ctx= 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:26:45.803 10:34:10 blockdev_crypto_sw -- 
bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == bdev ]] 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # [[ crypto_sw == crypto_* ]] 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1940584 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:26:45.803 10:34:10 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 1940584 00:26:45.803 10:34:10 blockdev_crypto_sw -- common/autotest_common.sh@829 -- # '[' -z 1940584 ']' 00:26:45.803 10:34:10 blockdev_crypto_sw -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.803 10:34:10 blockdev_crypto_sw -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.803 10:34:10 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.803 10:34:10 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.803 10:34:10 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:45.803 [2024-07-15 10:34:10.580381] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:45.803 [2024-07-15 10:34:10.580429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940584 ] 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:26:46.060 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:46.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:46.060 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:46.060 [2024-07-15 10:34:10.674283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.060 [2024-07-15 10:34:10.747458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.625 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.625 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@862 -- # return 0 00:26:46.625 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:26:46.625 10:34:11 blockdev_crypto_sw -- 
bdev/blockdev.sh@711 -- # setup_crypto_sw_conf 00:26:46.625 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@193 -- # rpc_cmd 00:26:46.625 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.625 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:46.882 Malloc0 00:26:46.882 Malloc1 00:26:46.882 true 00:26:46.882 true 00:26:46.882 true 00:26:46.882 [2024-07-15 10:34:11.611482] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:46.882 crypto_ram 00:26:46.882 [2024-07-15 10:34:11.619509] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:46.882 crypto_ram2 00:26:46.882 [2024-07-15 10:34:11.627530] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:46.882 crypto_ram3 00:26:46.882 [ 00:26:46.882 { 00:26:46.882 "name": "Malloc1", 00:26:46.882 "aliases": [ 00:26:46.882 "1a4df2d2-a4d0-4693-86d1-b203d59f3022" 00:26:46.882 ], 00:26:46.882 "product_name": "Malloc disk", 00:26:46.882 "block_size": 4096, 00:26:46.882 "num_blocks": 4096, 00:26:46.882 "uuid": "1a4df2d2-a4d0-4693-86d1-b203d59f3022", 00:26:46.882 "assigned_rate_limits": { 00:26:46.882 "rw_ios_per_sec": 0, 00:26:46.882 "rw_mbytes_per_sec": 0, 00:26:46.882 "r_mbytes_per_sec": 0, 00:26:46.882 "w_mbytes_per_sec": 0 00:26:46.882 }, 00:26:46.882 "claimed": true, 00:26:46.882 "claim_type": "exclusive_write", 00:26:46.882 "zoned": false, 00:26:46.882 "supported_io_types": { 00:26:46.882 "read": true, 00:26:46.882 "write": true, 00:26:46.882 "unmap": true, 00:26:46.882 "flush": true, 00:26:46.882 "reset": true, 00:26:46.882 "nvme_admin": false, 00:26:46.882 "nvme_io": false, 00:26:46.882 "nvme_io_md": false, 00:26:46.882 "write_zeroes": true, 00:26:46.882 "zcopy": true, 00:26:46.882 "get_zone_info": false, 00:26:46.882 "zone_management": false, 00:26:46.882 "zone_append": false, 00:26:46.882 "compare": false, 00:26:46.882 "compare_and_write": false, 00:26:46.882 "abort": true, 00:26:46.882 "seek_hole": false, 00:26:46.882 "seek_data": false, 00:26:46.882 "copy": true, 00:26:46.882 "nvme_iov_md": false 00:26:46.882 }, 00:26:46.882 "memory_domains": [ 00:26:46.882 { 00:26:46.882 "dma_device_id": "system", 00:26:46.882 "dma_device_type": 1 00:26:46.882 }, 00:26:46.882 { 00:26:46.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.882 "dma_device_type": 2 00:26:46.882 } 00:26:46.882 ], 00:26:46.882 "driver_specific": {} 00:26:46.882 } 00:26:46.882 ] 00:26:46.882 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.882 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:26:46.882 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.882 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:46.882 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.882 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # cat 00:26:46.882 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:26:46.882 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.882 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:26:47.140 10:34:11 
blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # jq -r .name 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "56ad1996-b8e7-5c1d-843f-a507a519823e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "56ad1996-b8e7-5c1d-843f-a507a519823e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ce82ccf2-eeb3-54d8-aa5b-1e785bdc9ff9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ce82ccf2-eeb3-54d8-aa5b-1e785bdc9ff9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' 
"nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:26:47.140 10:34:11 blockdev_crypto_sw -- bdev/blockdev.sh@754 -- # killprocess 1940584 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@948 -- # '[' -z 1940584 ']' 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@952 -- # kill -0 1940584 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # uname 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940584 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:47.140 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:47.141 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940584' 00:26:47.141 killing process with pid 1940584 00:26:47.141 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@967 -- # kill 1940584 00:26:47.141 10:34:11 blockdev_crypto_sw -- common/autotest_common.sh@972 -- # wait 1940584 00:26:47.707 10:34:12 blockdev_crypto_sw -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:47.707 10:34:12 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:26:47.707 10:34:12 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:26:47.707 10:34:12 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.707 10:34:12 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:47.707 ************************************ 00:26:47.707 START TEST bdev_hello_world 00:26:47.707 ************************************ 00:26:47.708 10:34:12 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:26:47.708 [2024-07-15 10:34:12.278680] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:26:47.708 [2024-07-15 10:34:12.278733] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1940874 ] 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:01.6 cannot be used 
00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:47.708 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:47.708 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:47.708 [2024-07-15 10:34:12.370675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.708 [2024-07-15 10:34:12.441316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.966 [2024-07-15 10:34:12.598362] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:47.966 [2024-07-15 10:34:12.598420] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:47.966 [2024-07-15 10:34:12.598446] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:47.966 [2024-07-15 10:34:12.606383] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:47.966 [2024-07-15 10:34:12.606400] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:47.966 [2024-07-15 10:34:12.606408] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:47.966 [2024-07-15 10:34:12.614401] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:47.966 [2024-07-15 10:34:12.614414] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:47.966 [2024-07-15 10:34:12.614422] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:47.966 [2024-07-15 10:34:12.652649] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:47.966 [2024-07-15 10:34:12.652674] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:26:47.966 [2024-07-15 10:34:12.652703] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:47.966 [2024-07-15 10:34:12.653697] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:47.966 [2024-07-15 10:34:12.653754] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:47.966 [2024-07-15 10:34:12.653770] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:47.966 [2024-07-15 10:34:12.653794] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
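The hello_bdev run just traced can be reproduced by hand. A minimal sketch, assuming the same SPDK build tree and the bdev.json this test loads (its contents are not shown in the log):

    # Run the SPDK hello-world example against the software crypto bdev.
    # --json supplies the bdev configuration, -b names the bdev to open.
    ./build/examples/hello_bdev \
        --json ./test/bdev/bdev.json \
        -b crypto_ram
    # On success it logs the write completion followed by
    # "Read string from bdev : Hello World!", exactly as in the trace above.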
00:26:47.966 00:26:47.966 [2024-07-15 10:34:12.653806] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:48.224 00:26:48.224 real 0m0.600s 00:26:48.224 user 0m0.396s 00:26:48.224 sys 0m0.189s 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:48.224 ************************************ 00:26:48.224 END TEST bdev_hello_world 00:26:48.224 ************************************ 00:26:48.224 10:34:12 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:48.224 10:34:12 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:26:48.224 10:34:12 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:48.224 10:34:12 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.224 10:34:12 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:48.224 ************************************ 00:26:48.224 START TEST bdev_bounds 00:26:48.224 ************************************ 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1941146 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1941146' 00:26:48.224 Process bdevio pid: 1941146 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1941146 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1941146 ']' 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.224 10:34:12 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:48.224 [2024-07-15 10:34:12.960873] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
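Between tests the harness tears the target application down with its killprocess helper, traced above for pid 1940584. A condensed sketch of that idiom (an assumption: this omits the sudo/root special-casing visible in the trace):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
        kill "$pid"                              # ask the SPDK app to shut down
        wait "$pid"                              # reap it and surface its exit status
    }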
00:26:48.224 [2024-07-15 10:34:12.960926] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1941146 ] 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.224 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:48.224 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:01.6 cannot be used 
00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:48.225 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.225 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:48.482 [2024-07-15 10:34:13.052014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:48.482 [2024-07-15 10:34:13.127945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.482 [2024-07-15 10:34:13.128042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.482 [2024-07-15 10:34:13.128045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.740 [2024-07-15 10:34:13.279934] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:48.740 [2024-07-15 10:34:13.279985] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:48.740 [2024-07-15 10:34:13.280010] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:48.740 [2024-07-15 10:34:13.287937] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:48.740 [2024-07-15 10:34:13.287949] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:48.740 [2024-07-15 10:34:13.287956] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:48.740 [2024-07-15 10:34:13.295961] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:48.740 [2024-07-15 10:34:13.295973] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:48.740 [2024-07-15 10:34:13.295980] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:48.998 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.998 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:26:48.998 10:34:13 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:49.257 I/O targets: 00:26:49.257 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:26:49.257 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:26:49.257 00:26:49.257 00:26:49.257 CUnit - A unit testing framework for C - Version 2.1-3 00:26:49.257 http://cunit.sourceforge.net/ 00:26:49.257 
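The bounds suite that follows is produced by the two commands already traced: bdevio is launched as an RPC-driven server, then tests.py triggers the suites. A shortened, hedged sketch run from the SPDK repo root (the harness inserts a waitforlisten between the two steps):

    # Launch bdevio with this test's bdev configuration (flags copied from the trace) ...
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    # ... wait for its RPC socket to come up, then drive the CUnit suites:
    ./test/bdev/bdevio/tests.py perform_tests
    # The Run Summary below (2 suites, 46 tests, 0 failures) is the output of this call.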
00:26:49.257 00:26:49.257 Suite: bdevio tests on: crypto_ram3 00:26:49.257 Test: blockdev write read block ...passed 00:26:49.257 Test: blockdev write zeroes read block ...passed 00:26:49.257 Test: blockdev write zeroes read no split ...passed 00:26:49.257 Test: blockdev write zeroes read split ...passed 00:26:49.257 Test: blockdev write zeroes read split partial ...passed 00:26:49.257 Test: blockdev reset ...passed 00:26:49.257 Test: blockdev write read 8 blocks ...passed 00:26:49.257 Test: blockdev write read size > 128k ...passed 00:26:49.257 Test: blockdev write read invalid size ...passed 00:26:49.257 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:49.257 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:49.257 Test: blockdev write read max offset ...passed 00:26:49.257 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:49.257 Test: blockdev writev readv 8 blocks ...passed 00:26:49.257 Test: blockdev writev readv 30 x 1block ...passed 00:26:49.257 Test: blockdev writev readv block ...passed 00:26:49.257 Test: blockdev writev readv size > 128k ...passed 00:26:49.257 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:49.257 Test: blockdev comparev and writev ...passed 00:26:49.257 Test: blockdev nvme passthru rw ...passed 00:26:49.257 Test: blockdev nvme passthru vendor specific ...passed 00:26:49.257 Test: blockdev nvme admin passthru ...passed 00:26:49.257 Test: blockdev copy ...passed 00:26:49.257 Suite: bdevio tests on: crypto_ram 00:26:49.257 Test: blockdev write read block ...passed 00:26:49.257 Test: blockdev write zeroes read block ...passed 00:26:49.257 Test: blockdev write zeroes read no split ...passed 00:26:49.257 Test: blockdev write zeroes read split ...passed 00:26:49.257 Test: blockdev write zeroes read split partial ...passed 00:26:49.257 Test: blockdev reset ...passed 00:26:49.257 Test: blockdev write read 8 blocks ...passed 00:26:49.257 Test: blockdev write read size > 128k ...passed 00:26:49.257 Test: blockdev write read invalid size ...passed 00:26:49.257 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:49.257 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:49.257 Test: blockdev write read max offset ...passed 00:26:49.257 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:49.257 Test: blockdev writev readv 8 blocks ...passed 00:26:49.257 Test: blockdev writev readv 30 x 1block ...passed 00:26:49.257 Test: blockdev writev readv block ...passed 00:26:49.257 Test: blockdev writev readv size > 128k ...passed 00:26:49.257 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:49.257 Test: blockdev comparev and writev ...passed 00:26:49.257 Test: blockdev nvme passthru rw ...passed 00:26:49.257 Test: blockdev nvme passthru vendor specific ...passed 00:26:49.257 Test: blockdev nvme admin passthru ...passed 00:26:49.257 Test: blockdev copy ...passed 00:26:49.257 00:26:49.257 Run Summary: Type Total Ran Passed Failed Inactive 00:26:49.257 suites 2 2 n/a 0 0 00:26:49.257 tests 46 46 46 0 0 00:26:49.257 asserts 260 260 260 0 n/a 00:26:49.257 00:26:49.257 Elapsed time = 0.074 seconds 00:26:49.257 0 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1941146 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1941146 ']' 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@952 -- # kill -0 1941146 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1941146 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1941146' 00:26:49.257 killing process with pid 1941146 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1941146 00:26:49.257 10:34:13 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1941146 00:26:49.516 10:34:14 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:26:49.516 00:26:49.516 real 0m1.223s 00:26:49.516 user 0m3.202s 00:26:49.516 sys 0m0.323s 00:26:49.516 10:34:14 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.516 10:34:14 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:49.516 ************************************ 00:26:49.516 END TEST bdev_bounds 00:26:49.517 ************************************ 00:26:49.517 10:34:14 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:49.517 10:34:14 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:26:49.517 10:34:14 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:49.517 10:34:14 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.517 10:34:14 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:49.517 ************************************ 00:26:49.517 START TEST bdev_nbd 00:26:49.517 ************************************ 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' 
'/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1941331 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1941331 /var/tmp/spdk-nbd.sock 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1941331 ']' 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:49.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:49.517 10:34:14 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:49.517 [2024-07-15 10:34:14.275889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
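The nbd test below exports each crypto bdev as a kernel /dev/nbdX node and round-trips data through it. Condensed into a hedged sketch (the socket path, bdev names and dd/cmp parameters mirror the traces that follow; the /tmp/nbdrandtest path and the RPC shell variable are illustrative, not from the log):

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # bdev_svc is already listening here

    $RPC nbd_start_disk crypto_ram  /dev/nbd0          # expose the bdevs as NBD block devices
    $RPC nbd_start_disk crypto_ram3 /dev/nbd1

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256           # 1 MiB of test data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct   # write through NBD
        cmp -b -n 1M /tmp/nbdrandtest "$d"                              # read back and compare
    done

    $RPC nbd_stop_disk /dev/nbd0                       # detach the NBD devices again
    $RPC nbd_stop_disk /dev/nbd1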
00:26:49.517 [2024-07-15 10:34:14.275939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:49.776 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.776 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:49.777 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:49.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:49.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:49.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:49.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:49.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:49.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:49.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:49.777 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:49.777 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:49.777 [2024-07-15 10:34:14.367289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.777 [2024-07-15 10:34:14.439647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.036 [2024-07-15 10:34:14.598590] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:26:50.036 [2024-07-15 10:34:14.598636] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:26:50.036 [2024-07-15 10:34:14.598646] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:50.036 [2024-07-15 10:34:14.606612] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:26:50.036 [2024-07-15 10:34:14.606628] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:26:50.036 [2024-07-15 10:34:14.606636] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:50.036 [2024-07-15 10:34:14.614632] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:26:50.036 [2024-07-15 10:34:14.614644] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:26:50.036 [2024-07-15 10:34:14.614652] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd 
-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:50.295 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:50.554 1+0 records in 00:26:50.554 1+0 records out 00:26:50.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000124612 s, 32.9 MB/s 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:50.554 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:26:50.813 10:34:15 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:50.813 1+0 records in 00:26:50.813 1+0 records out 00:26:50.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309954 s, 13.2 MB/s 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:26:50.813 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:51.084 { 00:26:51.084 "nbd_device": "/dev/nbd0", 00:26:51.084 "bdev_name": "crypto_ram" 00:26:51.084 }, 00:26:51.084 { 00:26:51.084 "nbd_device": "/dev/nbd1", 00:26:51.084 "bdev_name": "crypto_ram3" 00:26:51.084 } 00:26:51.084 ]' 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:51.084 { 00:26:51.084 "nbd_device": "/dev/nbd0", 00:26:51.084 "bdev_name": "crypto_ram" 00:26:51.084 }, 00:26:51.084 { 00:26:51.084 "nbd_device": "/dev/nbd1", 00:26:51.084 "bdev_name": "crypto_ram3" 00:26:51.084 } 00:26:51.084 ]' 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:51.084 10:34:15 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:51.084 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:51.342 10:34:15 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:51.342 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:51.342 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:51.342 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:51.342 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:51.342 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:51.343 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:51.343 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:51.343 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:51.343 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:51.343 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:51.343 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 
-- # echo 0 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:51.602 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:26:51.861 /dev/nbd0 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:51.861 1+0 records in 00:26:51.861 1+0 records out 00:26:51.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289624 s, 
14.1 MB/s 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:51.861 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:26:52.120 /dev/nbd1 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:52.120 1+0 records in 00:26:52.120 1+0 records out 00:26:52.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284636 s, 14.4 MB/s 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:52.120 { 00:26:52.120 "nbd_device": "/dev/nbd0", 00:26:52.120 "bdev_name": "crypto_ram" 00:26:52.120 }, 00:26:52.120 { 00:26:52.120 "nbd_device": "/dev/nbd1", 00:26:52.120 "bdev_name": "crypto_ram3" 00:26:52.120 } 00:26:52.120 ]' 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:52.120 { 00:26:52.120 "nbd_device": "/dev/nbd0", 00:26:52.120 "bdev_name": "crypto_ram" 00:26:52.120 }, 00:26:52.120 { 00:26:52.120 "nbd_device": "/dev/nbd1", 00:26:52.120 "bdev_name": "crypto_ram3" 00:26:52.120 } 00:26:52.120 ]' 00:26:52.120 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:52.379 /dev/nbd1' 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:52.379 /dev/nbd1' 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:52.379 256+0 records in 00:26:52.379 256+0 records out 00:26:52.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114327 s, 91.7 MB/s 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:52.379 256+0 records in 00:26:52.379 256+0 records out 00:26:52.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203292 s, 51.6 MB/s 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:52.379 10:34:16 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:52.379 256+0 records in 00:26:52.379 256+0 records out 00:26:52.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243636 s, 43.0 MB/s 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:52.379 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.638 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:26:52.897 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:53.157 malloc_lvol_verify 00:26:53.157 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:53.415 99a4962f-2109-483f-ae2b-dce7d314c460 00:26:53.415 10:34:17 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:53.415 a889edb5-9ded-43b5-9113-6507e9da7c6b 00:26:53.416 10:34:18 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:53.674 /dev/nbd0 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:26:53.674 mke2fs 1.46.5 (30-Dec-2021) 00:26:53.674 Discarding device blocks: 0/4096 done 00:26:53.674 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:53.674 00:26:53.674 Allocating group tables: 0/1 done 00:26:53.674 Writing inode tables: 0/1 done 00:26:53.674 Creating journal (1024 blocks): done 00:26:53.674 Writing superblocks and filesystem accounting information: 0/1 done 00:26:53.674 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:53.674 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1941331 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1941331 ']' 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1941331 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1941331 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
1941331' 00:26:53.933 killing process with pid 1941331 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1941331 00:26:53.933 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@972 -- # wait 1941331 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:26:54.191 00:26:54.191 real 0m4.534s 00:26:54.191 user 0m6.252s 00:26:54.191 sys 0m1.874s 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:54.191 ************************************ 00:26:54.191 END TEST bdev_nbd 00:26:54.191 ************************************ 00:26:54.191 10:34:18 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:26:54.191 10:34:18 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:26:54.191 10:34:18 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = nvme ']' 00:26:54.191 10:34:18 blockdev_crypto_sw -- bdev/blockdev.sh@764 -- # '[' crypto_sw = gpt ']' 00:26:54.191 10:34:18 blockdev_crypto_sw -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:26:54.191 10:34:18 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:54.191 10:34:18 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.191 10:34:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:26:54.191 ************************************ 00:26:54.191 START TEST bdev_fio 00:26:54.191 ************************************ 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:26:54.191 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio 
-- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:26:54.191 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:54.192 ************************************ 00:26:54.192 START TEST bdev_fio_rw_verify 00:26:54.192 ************************************ 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:26:54.192 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:54.476 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:54.476 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:54.476 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:54.476 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:26:54.476 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:54.476 10:34:18 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:54.476 10:34:19 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:54.476 10:34:19 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:54.476 10:34:19 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:54.476 10:34:19 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:26:54.743 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:54.743 job_crypto_ram3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:26:54.743 fio-3.35 00:26:54.743 Starting 2 threads 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:54.743 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:54.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:54.743 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:06.941 00:27:06.941 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1942507: Mon Jul 15 10:34:29 2024 00:27:06.941 read: IOPS=23.4k, BW=91.4MiB/s (95.9MB/s)(914MiB/10000msec) 00:27:06.941 slat (usec): min=8, max=357, avg=21.25, stdev= 8.33 00:27:06.941 clat (usec): min=5, max=636, avg=148.03, stdev=72.12 00:27:06.941 lat (usec): min=18, max=663, avg=169.29, stdev=75.82 00:27:06.941 clat percentiles (usec): 00:27:06.941 | 50.000th=[ 139], 99.000th=[ 334], 99.900th=[ 371], 99.990th=[ 400], 00:27:06.941 | 99.999th=[ 457] 00:27:06.941 write: IOPS=28.2k, BW=110MiB/s (115MB/s)(1042MiB/9473msec); 0 zone resets 00:27:06.941 slat (usec): min=9, max=1978, avg=31.48, stdev= 9.78 00:27:06.941 clat (usec): min=10, max=949, avg=184.97, stdev=92.61 00:27:06.941 lat (usec): min=35, max=2349, avg=216.45, stdev=96.10 00:27:06.941 clat percentiles (usec): 00:27:06.941 | 50.000th=[ 176], 99.000th=[ 408], 99.900th=[ 478], 99.990th=[ 701], 00:27:06.941 | 99.999th=[ 865] 00:27:06.941 bw ( KiB/s): min=91696, max=145176, per=94.68%, avg=106631.16, stdev=6726.80, samples=38 00:27:06.941 iops : min=22924, max=36294, avg=26657.79, stdev=1681.70, samples=38 00:27:06.941 lat (usec) : 10=0.01%, 20=0.01%, 50=5.93%, 100=18.76%, 250=57.45% 00:27:06.941 lat (usec) : 500=17.82%, 750=0.02%, 1000=0.01% 00:27:06.941 cpu : usr=99.67%, sys=0.01%, ctx=31, majf=0, minf=586 00:27:06.941 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:06.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.941 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.941 issued rwts: total=234096,266723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.941 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:06.941 00:27:06.941 Run status group 0 (all jobs): 00:27:06.941 READ: bw=91.4MiB/s (95.9MB/s), 91.4MiB/s-91.4MiB/s (95.9MB/s-95.9MB/s), io=914MiB (959MB), run=10000-10000msec 00:27:06.941 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1042MiB (1092MB), run=9473-9473msec 00:27:06.941 00:27:06.941 real 0m11.031s 00:27:06.941 user 0m29.098s 00:27:06.941 sys 0m0.353s 00:27:06.941 10:34:29 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:06.941 10:34:29 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:27:06.941 
************************************ 00:27:06.941 END TEST bdev_fio_rw_verify 00:27:06.941 ************************************ 00:27:06.941 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:27:06.941 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:27:06.941 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:06.941 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:27:06.941 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:06.941 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:27:06.941 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:27:06.941 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "56ad1996-b8e7-5c1d-843f-a507a519823e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "56ad1996-b8e7-5c1d-843f-a507a519823e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' 
' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ce82ccf2-eeb3-54d8-aa5b-1e785bdc9ff9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ce82ccf2-eeb3-54d8-aa5b-1e785bdc9ff9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:27:06.942 crypto_ram3 ]] 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "56ad1996-b8e7-5c1d-843f-a507a519823e"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "56ad1996-b8e7-5c1d-843f-a507a519823e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ce82ccf2-eeb3-54d8-aa5b-1e785bdc9ff9"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ce82ccf2-eeb3-54d8-aa5b-1e785bdc9ff9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:06.942 ************************************ 00:27:06.942 START TEST bdev_fio_trim 00:27:06.942 ************************************ 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim 
-- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:06.942 10:34:30 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:06.942 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:06.942 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:06.942 fio-3.35 00:27:06.942 Starting 2 threads 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:06.942 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.942 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:06.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:06.943 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:06.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.943 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:16.974 00:27:16.974 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1944501: Mon Jul 15 10:34:40 2024 00:27:16.974 write: IOPS=57.4k, BW=224MiB/s (235MB/s)(2244MiB/10001msec); 0 zone resets 00:27:16.974 slat (usec): min=9, max=1387, avg=15.72, stdev= 4.12 00:27:16.974 clat (usec): min=24, max=1514, avg=113.45, stdev=62.80 00:27:16.974 lat (usec): min=34, max=1529, avg=129.17, stdev=65.40 00:27:16.974 clat percentiles (usec): 00:27:16.974 | 50.000th=[ 92], 99.000th=[ 237], 99.900th=[ 269], 99.990th=[ 449], 00:27:16.974 | 99.999th=[ 742] 00:27:16.974 bw ( KiB/s): min=219632, max=234112, per=99.93%, avg=229621.05, stdev=1808.75, samples=38 00:27:16.974 iops : min=54908, max=58528, avg=57405.26, stdev=452.19, samples=38 00:27:16.974 trim: IOPS=57.4k, BW=224MiB/s (235MB/s)(2244MiB/10001msec); 0 zone resets 00:27:16.974 slat (usec): min=3, max=297, avg= 6.92, stdev= 1.98 00:27:16.974 clat (usec): min=29, max=1530, avg=75.71, stdev=23.27 00:27:16.974 lat (usec): min=34, max=1536, avg=82.64, stdev=23.46 00:27:16.974 clat percentiles (usec): 00:27:16.974 | 50.000th=[ 75], 99.000th=[ 127], 99.900th=[ 147], 99.990th=[ 265], 00:27:16.974 | 99.999th=[ 420] 00:27:16.974 bw ( KiB/s): min=219656, max=234112, per=99.93%, avg=229622.32, stdev=1807.66, samples=38 00:27:16.974 iops : min=54914, max=58528, avg=57405.58, stdev=451.91, samples=38 00:27:16.975 lat (usec) : 50=16.88%, 100=52.54%, 250=30.36%, 500=0.21%, 750=0.01% 00:27:16.975 lat (usec) : 1000=0.01% 00:27:16.975 lat (msec) : 2=0.01% 00:27:16.975 cpu : usr=99.72%, sys=0.00%, ctx=20, majf=0, minf=339 00:27:16.975 IO depths : 1=7.5%, 2=17.5%, 4=60.0%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.975 complete : 0=0.0%, 4=87.0%, 8=13.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.975 issued rwts: total=0,574489,574489,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.975 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:16.975 00:27:16.975 Run status group 0 (all jobs): 00:27:16.975 WRITE: bw=224MiB/s (235MB/s), 224MiB/s-224MiB/s (235MB/s-235MB/s), io=2244MiB (2353MB), run=10001-10001msec 00:27:16.975 TRIM: bw=224MiB/s (235MB/s), 224MiB/s-224MiB/s (235MB/s-235MB/s), io=2244MiB (2353MB), run=10001-10001msec 00:27:16.975 00:27:16.975 real 0m11.006s 00:27:16.975 user 0m28.903s 00:27:16.975 sys 0m0.332s 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:27:16.975 ************************************ 00:27:16.975 END TEST bdev_fio_trim 00:27:16.975 ************************************ 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio -- 
bdev/blockdev.sh@368 -- # rm -f 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:27:16.975 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:27:16.975 00:27:16.975 real 0m22.365s 00:27:16.975 user 0m58.166s 00:27:16.975 sys 0m0.866s 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:16.975 ************************************ 00:27:16.975 END TEST bdev_fio 00:27:16.975 ************************************ 00:27:16.975 10:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:16.975 10:34:41 blockdev_crypto_sw -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:16.975 10:34:41 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:16.975 10:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:27:16.975 10:34:41 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.975 10:34:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:16.975 ************************************ 00:27:16.975 START TEST bdev_verify 00:27:16.975 ************************************ 00:27:16.975 10:34:41 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:16.975 [2024-07-15 10:34:41.325466] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
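[editor's note] The verify pass launched above drives SPDK's bdevperf example application against the same bdev.json used by the fio stages. A minimal standalone sketch of that invocation is shown here; the flag annotations reflect bdevperf's usual option meanings and are assumptions, not output taken from this log.
# Sketch only: re-run the verify workload by hand against the same JSON config.
#   -q 128    queue depth per job
#   -o 4096   I/O size in bytes (4 KiB)
#   -w verify write, read back and compare
#   -t 5      run time in seconds
#   -m 0x3    core mask (two reactors, matching "Total cores available: 2" below)
#   -C        passed through verbatim from the test harness
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
"$SPDK/build/examples/bdevperf" \
    --json "$SPDK/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3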
00:27:16.975 [2024-07-15 10:34:41.325510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1946167 ] 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:16.975 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:16.975 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:16.975 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:16.975 [2024-07-15 10:34:41.417213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:16.975 [2024-07-15 10:34:41.488613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.975 [2024-07-15 10:34:41.488616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.975 [2024-07-15 10:34:41.647503] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:16.975 [2024-07-15 10:34:41.647562] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:16.975 [2024-07-15 10:34:41.647572] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:16.975 [2024-07-15 10:34:41.655521] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:16.975 [2024-07-15 10:34:41.655533] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:16.975 [2024-07-15 10:34:41.655540] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:16.975 [2024-07-15 10:34:41.663545] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:16.975 [2024-07-15 10:34:41.663557] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:16.975 [2024-07-15 10:34:41.663565] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:16.975 Running I/O for 5 seconds... 
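[editor's note] The two targets exercised in this run are the software-crypto vbdevs declared in bdev.json: crypto_ram layered on Malloc0 with key test_dek_sw, and crypto_ram3 layered on crypto_ram2 with key test_dek_sw3, the same driver_specific fields printed in the bdev JSON dump during the fio stage above. A small sketch of how that mapping could be confirmed on a running target follows; the RPC socket and the jq filter are illustrative assumptions, in the style of the nbd checks earlier in this log (add -s <socket> if the app is not on the default RPC socket).
# Sketch only: list which crypto vbdev sits on which base bdev and key.
SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
"$SPDK/scripts/rpc.py" bdev_get_bdevs \
  | jq -r '.[] | select(.product_name == "crypto")
           | "\(.name) on \(.driver_specific.crypto.base_bdev_name) key \(.driver_specific.crypto.key_name)"'
# Expected, per the bdev dump earlier in this log:
#   crypto_ram on Malloc0 key test_dek_sw
#   crypto_ram3 on crypto_ram2 key test_dek_sw3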
00:27:22.240 00:27:22.240 Latency(us) 00:27:22.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.240 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:22.240 Verification LBA range: start 0x0 length 0x800 00:27:22.240 crypto_ram : 5.02 8083.86 31.58 0.00 0.00 15775.77 1304.17 21076.38 00:27:22.240 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:22.240 Verification LBA range: start 0x800 length 0x800 00:27:22.240 crypto_ram : 5.02 8084.50 31.58 0.00 0.00 15773.78 1363.15 21076.38 00:27:22.240 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:22.241 Verification LBA range: start 0x0 length 0x800 00:27:22.241 crypto_ram3 : 5.03 4049.59 15.82 0.00 0.00 31475.90 1540.10 24641.54 00:27:22.241 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:22.241 Verification LBA range: start 0x800 length 0x800 00:27:22.241 crypto_ram3 : 5.03 4049.93 15.82 0.00 0.00 31471.51 1599.08 24641.54 00:27:22.241 =================================================================================================================== 00:27:22.241 Total : 24267.88 94.80 0.00 0.00 21018.74 1304.17 24641.54 00:27:22.241 00:27:22.241 real 0m5.657s 00:27:22.241 user 0m10.784s 00:27:22.241 sys 0m0.184s 00:27:22.241 10:34:46 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.241 10:34:46 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:27:22.241 ************************************ 00:27:22.241 END TEST bdev_verify 00:27:22.241 ************************************ 00:27:22.241 10:34:46 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:22.241 10:34:46 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:22.241 10:34:46 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:27:22.241 10:34:46 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.241 10:34:46 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:22.241 ************************************ 00:27:22.241 START TEST bdev_verify_big_io 00:27:22.241 ************************************ 00:27:22.241 10:34:47 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:22.499 [2024-07-15 10:34:47.064626] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:22.499 [2024-07-15 10:34:47.064669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1947182 ] 00:27:22.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.499 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:22.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.499 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:22.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.499 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:22.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.499 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:22.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.499 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:22.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.499 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:22.500 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:22.500 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:22.500 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:22.500 [2024-07-15 10:34:47.153734] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:22.500 [2024-07-15 10:34:47.223427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.500 [2024-07-15 10:34:47.223430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.759 [2024-07-15 10:34:47.376707] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:22.759 [2024-07-15 10:34:47.376764] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:22.759 [2024-07-15 10:34:47.376774] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:22.759 [2024-07-15 10:34:47.384727] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:22.759 [2024-07-15 10:34:47.384739] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:22.759 [2024-07-15 10:34:47.384747] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:22.759 [2024-07-15 10:34:47.392760] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:22.759 [2024-07-15 10:34:47.392772] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:22.759 [2024-07-15 10:34:47.392780] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:22.759 Running I/O for 5 seconds... 
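This pass repeats the verify workload with 64 KiB I/O (-o 65536) against the same three crypto bdevs. In bdevperf tables the MiB/s column is just IOPS times I/O size, so the 4 KiB verify results printed earlier can be sanity-checked with nothing more than the figures already shown:

  # IOPS * io_size / 2^20 should reproduce the MiB/s column of the earlier verify table.
  echo '8083.86 * 4096 / 1048576' | bc -l   # ~31.58 MiB/s for crypto_ram at 4 KiB
  echo '4049.59 * 4096 / 1048576' | bc -l   # ~15.82 MiB/s for crypto_ram3 at 4 KiB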
00:27:28.025 00:27:28.025 Latency(us) 00:27:28.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.025 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:28.025 Verification LBA range: start 0x0 length 0x80 00:27:28.025 crypto_ram : 5.12 624.55 39.03 0.00 0.00 201206.74 6370.10 268435.46 00:27:28.025 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:28.025 Verification LBA range: start 0x80 length 0x80 00:27:28.025 crypto_ram : 5.12 624.58 39.04 0.00 0.00 201178.62 6396.31 268435.46 00:27:28.025 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:28.025 Verification LBA range: start 0x0 length 0x80 00:27:28.025 crypto_ram3 : 5.27 339.88 21.24 0.00 0.00 360594.14 4797.24 270113.18 00:27:28.025 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:28.025 Verification LBA range: start 0x80 length 0x80 00:27:28.025 crypto_ram3 : 5.27 339.91 21.24 0.00 0.00 360472.20 4849.66 270113.18 00:27:28.025 =================================================================================================================== 00:27:28.025 Total : 1928.92 120.56 0.00 0.00 258391.83 4797.24 270113.18 00:27:28.284 00:27:28.284 real 0m5.902s 00:27:28.284 user 0m11.282s 00:27:28.284 sys 0m0.185s 00:27:28.284 10:34:52 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.284 10:34:52 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:28.284 ************************************ 00:27:28.284 END TEST bdev_verify_big_io 00:27:28.284 ************************************ 00:27:28.284 10:34:52 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:28.284 10:34:52 blockdev_crypto_sw -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:28.284 10:34:52 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:28.284 10:34:52 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.284 10:34:52 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:28.284 ************************************ 00:27:28.284 START TEST bdev_write_zeroes 00:27:28.284 ************************************ 00:27:28.284 10:34:52 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:28.284 [2024-07-15 10:34:53.050774] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:28.284 [2024-07-15 10:34:53.050814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948215 ] 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:28.543 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:28.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:28.543 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:28.543 [2024-07-15 10:34:53.139324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.543 [2024-07-15 10:34:53.208129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.801 [2024-07-15 10:34:53.365219] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:28.801 [2024-07-15 10:34:53.365267] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:28.801 [2024-07-15 10:34:53.365277] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:28.801 [2024-07-15 10:34:53.373239] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:27:28.801 [2024-07-15 10:34:53.373252] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:28.801 [2024-07-15 10:34:53.373260] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:28.801 [2024-07-15 10:34:53.381260] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:27:28.801 [2024-07-15 10:34:53.381271] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:27:28.801 [2024-07-15 10:34:53.381279] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:28.801 Running I/O for 1 seconds... 
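The write_zeroes pass only needs bdevs that advertise the WRITE_ZEROES I/O type, which the crypto vbdevs do (see the supported_io_types block dumped for crypt0 later in this log). A quick way to check that for any bdev on a running app, assuming jq is available and using the same bdev_get_bdevs RPC that appears further down:

  # Does a given bdev support WRITE_ZEROES? (bdev name is an example)
  ./scripts/rpc.py bdev_get_bdevs -b crypto_ram \
      | jq '.[0].supported_io_types.write_zeroes'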
00:27:29.737 00:27:29.737 Latency(us) 00:27:29.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.737 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:29.737 crypto_ram : 1.00 43565.27 170.18 0.00 0.00 2931.38 789.71 4272.95 00:27:29.737 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:29.737 crypto_ram3 : 1.01 21755.62 84.98 0.00 0.00 5856.06 3643.80 6422.53 00:27:29.737 =================================================================================================================== 00:27:29.737 Total : 65320.89 255.16 0.00 0.00 3906.27 789.71 6422.53 00:27:29.996 00:27:29.996 real 0m1.608s 00:27:29.996 user 0m1.402s 00:27:29.996 sys 0m0.187s 00:27:29.996 10:34:54 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:29.996 10:34:54 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:29.996 ************************************ 00:27:29.996 END TEST bdev_write_zeroes 00:27:29.996 ************************************ 00:27:29.996 10:34:54 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:29.996 10:34:54 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:29.996 10:34:54 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:29.996 10:34:54 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.996 10:34:54 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:29.996 ************************************ 00:27:29.996 START TEST bdev_json_nonenclosed 00:27:29.996 ************************************ 00:27:29.996 10:34:54 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:29.996 [2024-07-15 10:34:54.740324] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:29.996 [2024-07-15 10:34:54.740362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948532 ] 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:30.254 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:30.254 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.254 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:30.254 [2024-07-15 10:34:54.827711] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.254 [2024-07-15 10:34:54.896479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.254 [2024-07-15 10:34:54.896535] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:27:30.254 [2024-07-15 10:34:54.896549] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:30.254 [2024-07-15 10:34:54.896557] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:30.254 00:27:30.254 real 0m0.277s 00:27:30.254 user 0m0.160s 00:27:30.254 sys 0m0.116s 00:27:30.254 10:34:54 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:27:30.254 10:34:54 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:30.254 10:34:54 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:30.254 ************************************ 00:27:30.254 END TEST bdev_json_nonenclosed 00:27:30.254 ************************************ 00:27:30.254 10:34:55 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:27:30.254 10:34:55 blockdev_crypto_sw -- bdev/blockdev.sh@782 -- # true 00:27:30.254 10:34:55 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:30.254 10:34:55 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:27:30.254 10:34:55 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.254 10:34:55 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:30.512 ************************************ 00:27:30.512 START TEST bdev_json_nonarray 00:27:30.512 ************************************ 00:27:30.512 10:34:55 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w 
write_zeroes -t 1 '' 00:27:30.512 [2024-07-15 10:34:55.104859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:30.512 [2024-07-15 10:34:55.104919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948556 ] 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:30.512 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:30.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.512 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:30.512 [2024-07-15 10:34:55.195527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.512 [2024-07-15 10:34:55.264113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.512 [2024-07-15 10:34:55.264177] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:27:30.512 [2024-07-15 10:34:55.264191] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:30.512 [2024-07-15 10:34:55.264200] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:30.771 00:27:30.771 real 0m0.286s 00:27:30.771 user 0m0.172s 00:27:30.771 sys 0m0.112s 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:30.771 ************************************ 00:27:30.771 END TEST bdev_json_nonarray 00:27:30.771 ************************************ 00:27:30.771 10:34:55 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 234 00:27:30.771 10:34:55 blockdev_crypto_sw -- bdev/blockdev.sh@785 -- # true 00:27:30.771 10:34:55 blockdev_crypto_sw -- bdev/blockdev.sh@787 -- # [[ crypto_sw == bdev ]] 00:27:30.771 10:34:55 blockdev_crypto_sw -- bdev/blockdev.sh@794 -- # [[ crypto_sw == gpt ]] 00:27:30.771 10:34:55 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # [[ crypto_sw == crypto_sw ]] 00:27:30.771 10:34:55 blockdev_crypto_sw -- bdev/blockdev.sh@799 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:27:30.771 10:34:55 blockdev_crypto_sw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:30.771 10:34:55 blockdev_crypto_sw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.771 10:34:55 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:30.771 ************************************ 00:27:30.771 START TEST bdev_crypto_enomem 00:27:30.771 ************************************ 
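The enomem case that starts here stacks a crypto vbdev (crypt0) on an error-injection bdev (EE_base0) and, as shown a little further down, injects ENOMEM errors on write I/O (bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem); the run passes if the randwrite workload still completes with 0 failures, i.e. the crypto layer is expected to retry I/O the lower layer could not accept. A sketch of that step, reusing the injection RPC that appears verbatim below; creating the EE_-prefixed wrapper via bdev_error_create is an assumption about how the script sets it up:

  # Sketch: wrap a base bdev with an error injector, then fail queued writes with ENOMEM.
  ./scripts/rpc.py bdev_error_create base0                     # exposes EE_base0
  ./scripts/rpc.py bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem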
00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1123 -- # bdev_crypto_enomem 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local base_dev=base0 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local test_dev=crypt0 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local err_dev=EE_base0 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@638 -- # local qd=32 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # ERR_PID=1948580 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@643 -- # waitforlisten 1948580 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@829 -- # '[' -z 1948580 ']' 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.771 10:34:55 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:30.771 [2024-07-15 10:34:55.480678] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:30.771 [2024-07-15 10:34:55.480724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948580 ] 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:30.771 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:30.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:30.771 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:31.029 [2024-07-15 10:34:55.571580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.029 [2024-07-15 10:34:55.637642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@862 -- # return 0 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@645 -- # rpc_cmd 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:31.594 true 00:27:31.594 base0 00:27:31.594 true 00:27:31.594 [2024-07-15 10:34:56.305697] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:27:31.594 crypt0 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@652 -- # waitforbdev crypt0 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@897 -- # local bdev_name=crypt0 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local i 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd 
bdev_get_bdevs -b crypt0 -t 2000 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.594 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:31.594 [ 00:27:31.594 { 00:27:31.594 "name": "crypt0", 00:27:31.594 "aliases": [ 00:27:31.594 "4f2c914d-21ed-5aa9-8c94-4117f7910c6a" 00:27:31.594 ], 00:27:31.594 "product_name": "crypto", 00:27:31.594 "block_size": 512, 00:27:31.594 "num_blocks": 2097152, 00:27:31.594 "uuid": "4f2c914d-21ed-5aa9-8c94-4117f7910c6a", 00:27:31.594 "assigned_rate_limits": { 00:27:31.594 "rw_ios_per_sec": 0, 00:27:31.594 "rw_mbytes_per_sec": 0, 00:27:31.594 "r_mbytes_per_sec": 0, 00:27:31.594 "w_mbytes_per_sec": 0 00:27:31.594 }, 00:27:31.594 "claimed": false, 00:27:31.594 "zoned": false, 00:27:31.594 "supported_io_types": { 00:27:31.594 "read": true, 00:27:31.594 "write": true, 00:27:31.594 "unmap": false, 00:27:31.594 "flush": false, 00:27:31.594 "reset": true, 00:27:31.594 "nvme_admin": false, 00:27:31.594 "nvme_io": false, 00:27:31.594 "nvme_io_md": false, 00:27:31.594 "write_zeroes": true, 00:27:31.594 "zcopy": false, 00:27:31.594 "get_zone_info": false, 00:27:31.594 "zone_management": false, 00:27:31.594 "zone_append": false, 00:27:31.594 "compare": false, 00:27:31.594 "compare_and_write": false, 00:27:31.594 "abort": false, 00:27:31.594 "seek_hole": false, 00:27:31.594 "seek_data": false, 00:27:31.594 "copy": false, 00:27:31.594 "nvme_iov_md": false 00:27:31.594 }, 00:27:31.594 "memory_domains": [ 00:27:31.594 { 00:27:31.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.594 "dma_device_type": 2 00:27:31.594 } 00:27:31.594 ], 00:27:31.594 "driver_specific": { 00:27:31.595 "crypto": { 00:27:31.595 "base_bdev_name": "EE_base0", 00:27:31.595 "name": "crypt0", 00:27:31.595 "key_name": "test_dek_sw" 00:27:31.595 } 00:27:31.595 } 00:27:31.595 } 00:27:31.595 ] 00:27:31.595 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.595 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@905 -- # return 0 00:27:31.595 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@655 -- # rpcpid=1948841 00:27:31.595 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # sleep 1 00:27:31.595 10:34:56 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:31.900 Running I/O for 5 seconds... 
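The JSON dumped above is the full descriptor for crypt0: driver_specific.crypto ties the vbdev to its base (EE_base0) and key (test_dek_sw), and num_blocks times block_size gives the capacity the workload runs against. Two small checks against exactly that output, assuming jq and bc are available:

  # Pull the crypto binding out of the descriptor shown above.
  ./scripts/rpc.py bdev_get_bdevs -b crypt0 \
      | jq '.[0].driver_specific.crypto'     # -> base_bdev_name EE_base0, key_name test_dek_sw
  # Capacity check: 2097152 blocks * 512 B = 1 GiB.
  echo '2097152 * 512 / 1024 / 1024 / 1024' | bc -l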
00:27:32.833 10:34:57 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:27:32.833 10:34:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.833 10:34:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:32.833 10:34:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.833 10:34:57 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@660 -- # wait 1948841 00:27:37.018 00:27:37.018 Latency(us) 00:27:37.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.018 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:27:37.018 crypt0 : 5.00 58673.56 229.19 0.00 0.00 543.30 250.68 776.60 00:27:37.018 =================================================================================================================== 00:27:37.018 Total : 58673.56 229.19 0.00 0.00 543.30 250.68 776.60 00:27:37.018 0 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@662 -- # rpc_cmd bdev_crypto_delete crypt0 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # killprocess 1948580 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@948 -- # '[' -z 1948580 ']' 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@952 -- # kill -0 1948580 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # uname 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1948580 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1948580' 00:27:37.018 killing process with pid 1948580 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@967 -- # kill 1948580 00:27:37.018 Received shutdown signal, test time was about 5.000000 seconds 00:27:37.018 00:27:37.018 Latency(us) 00:27:37.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.018 =================================================================================================================== 00:27:37.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@972 -- # wait 1948580 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@665 -- # trap - SIGINT SIGTERM EXIT 00:27:37.018 00:27:37.018 real 0m6.257s 00:27:37.018 user 0m6.410s 00:27:37.018 sys 0m0.327s 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.018 10:35:01 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:27:37.018 ************************************ 00:27:37.018 END TEST bdev_crypto_enomem 00:27:37.018 ************************************ 00:27:37.018 10:35:01 blockdev_crypto_sw -- common/autotest_common.sh@1142 -- # return 0 00:27:37.018 10:35:01 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:27:37.018 10:35:01 blockdev_crypto_sw -- bdev/blockdev.sh@811 -- # cleanup 00:27:37.018 10:35:01 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:27:37.018 10:35:01 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:37.018 10:35:01 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:27:37.018 10:35:01 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:27:37.018 10:35:01 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:27:37.018 10:35:01 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:27:37.018 00:27:37.018 real 0m51.340s 00:27:37.018 user 1m40.366s 00:27:37.018 sys 0m5.499s 00:27:37.018 10:35:01 blockdev_crypto_sw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.018 10:35:01 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:27:37.018 ************************************ 00:27:37.018 END TEST blockdev_crypto_sw 00:27:37.018 ************************************ 00:27:37.018 10:35:01 -- common/autotest_common.sh@1142 -- # return 0 00:27:37.018 10:35:01 -- spdk/autotest.sh@359 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:27:37.018 10:35:01 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:37.018 10:35:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.018 10:35:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.286 ************************************ 00:27:37.286 START TEST blockdev_crypto_qat 00:27:37.286 ************************************ 00:27:37.286 10:35:01 blockdev_crypto_qat -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:27:37.286 * Looking for test storage... 
00:27:37.286 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # uname -s 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # test_type=crypto_qat 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # crypto_device= 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # dek= 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # env_ctx= 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == bdev ]] 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # [[ crypto_qat == crypto_* ]] 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1949704 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 1949704 00:27:37.286 10:35:01 blockdev_crypto_qat -- common/autotest_common.sh@829 -- # '[' -z 1949704 ']' 00:27:37.286 10:35:01 blockdev_crypto_qat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.286 10:35:01 blockdev_crypto_qat -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:37.286 10:35:01 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
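waitforlisten blocks until the freshly launched spdk_tgt (pid 1949704) answers on /var/tmp/spdk.sock; because the target is started with --wait-for-rpc it then sits in a pre-init state until the crypto/accel configuration has been applied. A minimal stand-in for that wait, assuming the default socket path rather than the harness's real helper:

  # Wait for the SPDK RPC socket, then confirm the server answers (rough equivalent of waitforlisten).
  SOCK=/var/tmp/spdk.sock
  while [ ! -S "$SOCK" ]; do sleep 0.1; done
  ./scripts/rpc.py -s "$SOCK" rpc_get_methods > /dev/null && echo "spdk_tgt is up"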
00:27:37.286 10:35:01 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:37.286 10:35:01 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:37.286 10:35:01 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:27:37.286 [2024-07-15 10:35:01.999519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:37.286 [2024-07-15 10:35:01.999570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1949704 ] 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 
0000:3f:01.3 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:37.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.286 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:37.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.287 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:37.569 [2024-07-15 10:35:02.092479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.569 [2024-07-15 10:35:02.166612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.137 10:35:02 blockdev_crypto_qat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.137 10:35:02 blockdev_crypto_qat -- common/autotest_common.sh@862 -- # return 0 00:27:38.137 10:35:02 blockdev_crypto_qat -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:27:38.137 10:35:02 blockdev_crypto_qat -- bdev/blockdev.sh@708 -- # setup_crypto_qat_conf 00:27:38.137 10:35:02 blockdev_crypto_qat -- bdev/blockdev.sh@170 -- # rpc_cmd 00:27:38.137 10:35:02 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.137 10:35:02 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:38.137 [2024-07-15 10:35:02.784505] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:38.137 [2024-07-15 10:35:02.792534] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:38.137 [2024-07-15 10:35:02.800550] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:38.137 [2024-07-15 10:35:02.860602] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:40.667 true 00:27:40.667 true 00:27:40.667 true 00:27:40.667 true 00:27:40.667 Malloc0 00:27:40.667 Malloc1 00:27:40.667 Malloc2 00:27:40.667 Malloc3 00:27:40.667 [2024-07-15 10:35:05.138043] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:40.667 crypto_ram 00:27:40.667 [2024-07-15 10:35:05.146059] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key 
"test_dek_qat_xts" 00:27:40.667 crypto_ram1 00:27:40.667 [2024-07-15 10:35:05.154080] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:40.667 crypto_ram2 00:27:40.667 [2024-07-15 10:35:05.162102] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:40.667 crypto_ram3 00:27:40.667 [ 00:27:40.667 { 00:27:40.667 "name": "Malloc1", 00:27:40.667 "aliases": [ 00:27:40.667 "faf41531-1c9d-4a37-afae-8b2ef9b08b2f" 00:27:40.667 ], 00:27:40.667 "product_name": "Malloc disk", 00:27:40.667 "block_size": 512, 00:27:40.667 "num_blocks": 65536, 00:27:40.667 "uuid": "faf41531-1c9d-4a37-afae-8b2ef9b08b2f", 00:27:40.667 "assigned_rate_limits": { 00:27:40.667 "rw_ios_per_sec": 0, 00:27:40.667 "rw_mbytes_per_sec": 0, 00:27:40.667 "r_mbytes_per_sec": 0, 00:27:40.667 "w_mbytes_per_sec": 0 00:27:40.667 }, 00:27:40.667 "claimed": true, 00:27:40.667 "claim_type": "exclusive_write", 00:27:40.667 "zoned": false, 00:27:40.667 "supported_io_types": { 00:27:40.667 "read": true, 00:27:40.667 "write": true, 00:27:40.667 "unmap": true, 00:27:40.667 "flush": true, 00:27:40.667 "reset": true, 00:27:40.667 "nvme_admin": false, 00:27:40.667 "nvme_io": false, 00:27:40.667 "nvme_io_md": false, 00:27:40.667 "write_zeroes": true, 00:27:40.667 "zcopy": true, 00:27:40.667 "get_zone_info": false, 00:27:40.667 "zone_management": false, 00:27:40.667 "zone_append": false, 00:27:40.667 "compare": false, 00:27:40.667 "compare_and_write": false, 00:27:40.667 "abort": true, 00:27:40.667 "seek_hole": false, 00:27:40.667 "seek_data": false, 00:27:40.667 "copy": true, 00:27:40.667 "nvme_iov_md": false 00:27:40.667 }, 00:27:40.667 "memory_domains": [ 00:27:40.667 { 00:27:40.667 "dma_device_id": "system", 00:27:40.667 "dma_device_type": 1 00:27:40.668 }, 00:27:40.668 { 00:27:40.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.668 "dma_device_type": 2 00:27:40.668 } 00:27:40.668 ], 00:27:40.668 "driver_specific": {} 00:27:40.668 } 00:27:40.668 ] 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # cat 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.668 10:35:05 
blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # jq -r .name 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "76603e26-c962-56fd-a73b-c17c47642a23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "76603e26-c962-56fd-a73b-c17c47642a23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "dddf5f2d-c9f4-50be-8850-a46dcedbd9cd"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dddf5f2d-c9f4-50be-8850-a46dcedbd9cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' 
"aliases": [' ' "9a877bf7-83b0-5ba1-93fc-7fbc16452a48"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9a877bf7-83b0-5ba1-93fc-7fbc16452a48",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f48f6f93-a63a-555c-be96-c586d87da99b"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f48f6f93-a63a-555c-be96-c586d87da99b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # hello_world_bdev=crypto_ram 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:27:40.668 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@754 -- # killprocess 1949704 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@948 -- # '[' -z 1949704 ']' 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@952 -- # kill -0 1949704 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # uname 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1949704 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:40.668 10:35:05 blockdev_crypto_qat -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1949704' 00:27:40.668 killing process with pid 1949704 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@967 -- # kill 1949704 00:27:40.668 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@972 -- # wait 1949704 00:27:41.236 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:41.236 10:35:05 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:41.236 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:41.236 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.236 10:35:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:41.236 ************************************ 00:27:41.236 START TEST bdev_hello_world 00:27:41.236 ************************************ 00:27:41.236 10:35:05 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:27:41.236 [2024-07-15 10:35:05.935237] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:41.236 [2024-07-15 10:35:05.935279] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950441 ] 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 
0000:3d:02.3 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.236 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:41.236 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:41.237 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:41.237 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:41.237 [2024-07-15 10:35:06.024602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.496 [2024-07-15 10:35:06.094173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.496 [2024-07-15 10:35:06.115040] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:41.496 [2024-07-15 10:35:06.123065] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:41.496 [2024-07-15 10:35:06.131090] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:41.496 [2024-07-15 10:35:06.236159] 
accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:44.025 [2024-07-15 10:35:08.370159] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:44.025 [2024-07-15 10:35:08.370210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:44.025 [2024-07-15 10:35:08.370220] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:44.025 [2024-07-15 10:35:08.378178] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:44.025 [2024-07-15 10:35:08.378190] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:44.025 [2024-07-15 10:35:08.378197] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:44.025 [2024-07-15 10:35:08.386198] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:44.025 [2024-07-15 10:35:08.386209] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:44.025 [2024-07-15 10:35:08.386216] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:44.025 [2024-07-15 10:35:08.394219] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:44.025 [2024-07-15 10:35:08.394229] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:44.025 [2024-07-15 10:35:08.394236] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:44.025 [2024-07-15 10:35:08.461395] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:44.025 [2024-07-15 10:35:08.461430] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:27:44.025 [2024-07-15 10:35:08.461441] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:44.025 [2024-07-15 10:35:08.462352] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:44.025 [2024-07-15 10:35:08.462404] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:44.025 [2024-07-15 10:35:08.462415] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:44.025 [2024-07-15 10:35:08.462444] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
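The "Currently unable to find bdev with name: MallocN" / "vbdev creation deferred pending base bdev arrival" notices above are expected ordering behaviour rather than failures: the bdev_crypto_create entries in bdev.json can be processed before their Malloc base bdevs exist, and each crypto vbdev is completed once its base bdev registers. A hedged sketch of the same ordering driven by hand over RPC, assuming the RPC path behaves like the JSON-config load seen here and that the '-n' key-name flag applies:

  rpc=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
  # Ask for the crypto vbdev first; the base bdev does not exist yet, so the
  # target logs 'vbdev creation deferred pending base bdev arrival'.
  $rpc bdev_crypto_create Malloc0 crypto_ram -n test_dek_qat_cbc
  # Create the base bdev; its arrival triggers creation of crypto_ram.
  $rpc bdev_malloc_create -b Malloc0 32 512
  # crypto_ram is now resolvable.
  $rpc bdev_get_bdevs -b crypto_ram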
00:27:44.025 00:27:44.025 [2024-07-15 10:35:08.462456] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:44.025 00:27:44.025 real 0m2.864s 00:27:44.025 user 0m2.542s 00:27:44.025 sys 0m0.281s 00:27:44.025 10:35:08 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.025 10:35:08 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:27:44.025 ************************************ 00:27:44.025 END TEST bdev_hello_world 00:27:44.025 ************************************ 00:27:44.025 10:35:08 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:27:44.025 10:35:08 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:27:44.025 10:35:08 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:44.025 10:35:08 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.025 10:35:08 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:44.283 ************************************ 00:27:44.283 START TEST bdev_bounds 00:27:44.283 ************************************ 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=1950881 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 1950881' 00:27:44.283 Process bdevio pid: 1950881 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 1950881 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 1950881 ']' 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:44.283 10:35:08 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:44.283 [2024-07-15 10:35:08.874710] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
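The bdev_bounds run above starts the bdevio application in wait mode against the same bdev.json (the '-s 0' presumably carries the PRE_RESERVED_MEM=0 set in the prologue); the CUnit suites that follow are then kicked off separately with tests.py perform_tests. A condensed sketch of that two-step pattern, with the trailing '' env_ctx argument omitted:

  spdk=/var/jenkins/workspace/crypto-phy-autotest/spdk
  # Start the I/O exerciser; -w makes it wait until tests are requested over RPC.
  $spdk/test/bdev/bdevio/bdevio -w -s 0 --json $spdk/test/bdev/bdev.json &
  # Trigger the suites (the 'I/O targets' / CUnit output below comes from this).
  $spdk/test/bdev/bdevio/tests.py perform_tests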
00:27:44.283 [2024-07-15 10:35:08.874757] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950881 ] 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:01.6 cannot be used 
00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:44.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:44.283 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:44.283 [2024-07-15 10:35:08.966946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:44.283 [2024-07-15 10:35:09.042535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.283 [2024-07-15 10:35:09.042632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.283 [2024-07-15 10:35:09.042634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.283 [2024-07-15 10:35:09.063594] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:44.284 [2024-07-15 10:35:09.071621] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:44.545 [2024-07-15 10:35:09.079639] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:44.545 [2024-07-15 10:35:09.181200] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:47.074 [2024-07-15 10:35:11.316400] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:47.074 [2024-07-15 10:35:11.316459] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:47.074 [2024-07-15 10:35:11.316487] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:47.074 [2024-07-15 10:35:11.324419] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:47.074 [2024-07-15 10:35:11.324433] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:47.074 [2024-07-15 10:35:11.324441] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:47.074 [2024-07-15 10:35:11.332443] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:47.074 [2024-07-15 10:35:11.332454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:47.074 [2024-07-15 10:35:11.332462] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:47.074 [2024-07-15 10:35:11.340464] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_qat_xts2" 00:27:47.074 [2024-07-15 10:35:11.340475] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:47.074 [2024-07-15 10:35:11.340482] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:47.074 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.074 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:27:47.074 10:35:11 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:47.074 I/O targets: 00:27:47.074 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:27:47.074 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:27:47.074 crypto_ram2: 8192 blocks of 4096 bytes (32 MiB) 00:27:47.074 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:27:47.074 00:27:47.074 00:27:47.074 CUnit - A unit testing framework for C - Version 2.1-3 00:27:47.074 http://cunit.sourceforge.net/ 00:27:47.074 00:27:47.074 00:27:47.074 Suite: bdevio tests on: crypto_ram3 00:27:47.074 Test: blockdev write read block ...passed 00:27:47.074 Test: blockdev write zeroes read block ...passed 00:27:47.074 Test: blockdev write zeroes read no split ...passed 00:27:47.074 Test: blockdev write zeroes read split ...passed 00:27:47.074 Test: blockdev write zeroes read split partial ...passed 00:27:47.074 Test: blockdev reset ...passed 00:27:47.074 Test: blockdev write read 8 blocks ...passed 00:27:47.074 Test: blockdev write read size > 128k ...passed 00:27:47.074 Test: blockdev write read invalid size ...passed 00:27:47.074 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:47.074 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:47.074 Test: blockdev write read max offset ...passed 00:27:47.074 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:47.074 Test: blockdev writev readv 8 blocks ...passed 00:27:47.074 Test: blockdev writev readv 30 x 1block ...passed 00:27:47.074 Test: blockdev writev readv block ...passed 00:27:47.074 Test: blockdev writev readv size > 128k ...passed 00:27:47.074 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:47.074 Test: blockdev comparev and writev ...passed 00:27:47.074 Test: blockdev nvme passthru rw ...passed 00:27:47.074 Test: blockdev nvme passthru vendor specific ...passed 00:27:47.074 Test: blockdev nvme admin passthru ...passed 00:27:47.074 Test: blockdev copy ...passed 00:27:47.074 Suite: bdevio tests on: crypto_ram2 00:27:47.074 Test: blockdev write read block ...passed 00:27:47.074 Test: blockdev write zeroes read block ...passed 00:27:47.074 Test: blockdev write zeroes read no split ...passed 00:27:47.074 Test: blockdev write zeroes read split ...passed 00:27:47.074 Test: blockdev write zeroes read split partial ...passed 00:27:47.074 Test: blockdev reset ...passed 00:27:47.074 Test: blockdev write read 8 blocks ...passed 00:27:47.074 Test: blockdev write read size > 128k ...passed 00:27:47.075 Test: blockdev write read invalid size ...passed 00:27:47.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:47.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:47.075 Test: blockdev write read max offset ...passed 00:27:47.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:47.075 Test: blockdev 
writev readv 8 blocks ...passed 00:27:47.075 Test: blockdev writev readv 30 x 1block ...passed 00:27:47.075 Test: blockdev writev readv block ...passed 00:27:47.075 Test: blockdev writev readv size > 128k ...passed 00:27:47.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:47.075 Test: blockdev comparev and writev ...passed 00:27:47.075 Test: blockdev nvme passthru rw ...passed 00:27:47.075 Test: blockdev nvme passthru vendor specific ...passed 00:27:47.075 Test: blockdev nvme admin passthru ...passed 00:27:47.075 Test: blockdev copy ...passed 00:27:47.075 Suite: bdevio tests on: crypto_ram1 00:27:47.075 Test: blockdev write read block ...passed 00:27:47.075 Test: blockdev write zeroes read block ...passed 00:27:47.075 Test: blockdev write zeroes read no split ...passed 00:27:47.075 Test: blockdev write zeroes read split ...passed 00:27:47.075 Test: blockdev write zeroes read split partial ...passed 00:27:47.075 Test: blockdev reset ...passed 00:27:47.075 Test: blockdev write read 8 blocks ...passed 00:27:47.075 Test: blockdev write read size > 128k ...passed 00:27:47.075 Test: blockdev write read invalid size ...passed 00:27:47.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:47.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:47.075 Test: blockdev write read max offset ...passed 00:27:47.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:47.075 Test: blockdev writev readv 8 blocks ...passed 00:27:47.075 Test: blockdev writev readv 30 x 1block ...passed 00:27:47.075 Test: blockdev writev readv block ...passed 00:27:47.075 Test: blockdev writev readv size > 128k ...passed 00:27:47.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:47.075 Test: blockdev comparev and writev ...passed 00:27:47.075 Test: blockdev nvme passthru rw ...passed 00:27:47.075 Test: blockdev nvme passthru vendor specific ...passed 00:27:47.075 Test: blockdev nvme admin passthru ...passed 00:27:47.075 Test: blockdev copy ...passed 00:27:47.075 Suite: bdevio tests on: crypto_ram 00:27:47.075 Test: blockdev write read block ...passed 00:27:47.075 Test: blockdev write zeroes read block ...passed 00:27:47.075 Test: blockdev write zeroes read no split ...passed 00:27:47.075 Test: blockdev write zeroes read split ...passed 00:27:47.075 Test: blockdev write zeroes read split partial ...passed 00:27:47.075 Test: blockdev reset ...passed 00:27:47.075 Test: blockdev write read 8 blocks ...passed 00:27:47.075 Test: blockdev write read size > 128k ...passed 00:27:47.075 Test: blockdev write read invalid size ...passed 00:27:47.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:47.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:47.075 Test: blockdev write read max offset ...passed 00:27:47.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:47.075 Test: blockdev writev readv 8 blocks ...passed 00:27:47.075 Test: blockdev writev readv 30 x 1block ...passed 00:27:47.075 Test: blockdev writev readv block ...passed 00:27:47.075 Test: blockdev writev readv size > 128k ...passed 00:27:47.075 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:47.075 Test: blockdev comparev and writev ...passed 00:27:47.075 Test: blockdev nvme passthru rw ...passed 00:27:47.075 Test: blockdev nvme passthru vendor specific ...passed 00:27:47.075 Test: blockdev nvme admin passthru ...passed 00:27:47.075 Test: 
blockdev copy ...passed 00:27:47.075 00:27:47.075 Run Summary: Type Total Ran Passed Failed Inactive 00:27:47.075 suites 4 4 n/a 0 0 00:27:47.075 tests 92 92 92 0 0 00:27:47.075 asserts 520 520 520 0 n/a 00:27:47.075 00:27:47.075 Elapsed time = 0.495 seconds 00:27:47.075 0 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 1950881 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 1950881 ']' 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 1950881 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1950881 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1950881' 00:27:47.075 killing process with pid 1950881 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@967 -- # kill 1950881 00:27:47.075 10:35:11 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@972 -- # wait 1950881 00:27:47.334 10:35:12 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:27:47.334 00:27:47.334 real 0m3.293s 00:27:47.334 user 0m9.199s 00:27:47.334 sys 0m0.466s 00:27:47.334 10:35:12 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:47.334 10:35:12 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:27:47.334 ************************************ 00:27:47.334 END TEST bdev_bounds 00:27:47.334 ************************************ 00:27:47.593 10:35:12 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:27:47.593 10:35:12 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:27:47.593 10:35:12 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:47.593 10:35:12 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.593 10:35:12 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:47.593 ************************************ 00:27:47.593 START TEST bdev_nbd 00:27:47.593 ************************************ 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:27:47.593 10:35:12 
blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=4 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=4 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=1951508 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 1951508 /var/tmp/spdk-nbd.sock 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 1951508 ']' 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:47.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.593 10:35:12 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:47.593 [2024-07-15 10:35:12.260309] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
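For the nbd test the harness starts a bdev_svc app on its own RPC socket (/var/tmp/spdk-nbd.sock, visible in the bdev_svc command above) and then exports each crypto bdev as a kernel NBD device; the dd/stat calls further down are waitfornbd verifying that the node is readable. A sketch of one such round trip using the RPC names that appear in this log (the device node and scratch file path are illustrative):

  rpc="/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # Export the crypto bdev as /dev/nbd0 (the test below omits the second
  # argument and lets SPDK pick the node).
  $rpc nbd_start_disk crypto_ram /dev/nbd0
  # waitfornbd-style check: read one 4 KiB block straight through the NBD device.
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  stat -c %s /tmp/nbdtest    # expect 4096
  $rpc nbd_stop_disk /dev/nbd0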
00:27:47.593 [2024-07-15 10:35:12.260357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:47.593 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:47.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:47.593 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:47.593 [2024-07-15 10:35:12.355288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.852 [2024-07-15 10:35:12.426668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.852 [2024-07-15 10:35:12.447566] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:27:47.852 [2024-07-15 10:35:12.455599] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:27:47.852 [2024-07-15 10:35:12.463605] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:27:47.852 [2024-07-15 10:35:12.556146] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:27:50.384 [2024-07-15 10:35:14.687083] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:27:50.384 [2024-07-15 10:35:14.687156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:50.384 [2024-07-15 10:35:14.687167] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:50.384 [2024-07-15 10:35:14.695104] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:27:50.384 [2024-07-15 10:35:14.695118] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:27:50.384 [2024-07-15 10:35:14.695126] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:50.384 [2024-07-15 10:35:14.703121] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:27:50.384 [2024-07-15 10:35:14.703132] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:27:50.384 [2024-07-15 10:35:14.703139] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:50.384 [2024-07-15 10:35:14.711142] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:27:50.384 [2024-07-15 10:35:14.711155] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:50.384 [2024-07-15 10:35:14.711162] vbdev_crypto.c: 
617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:50.384 10:35:14 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.385 1+0 records in 00:27:50.385 1+0 records out 00:27:50.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027008 s, 15.2 MB/s 00:27:50.385 10:35:14 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.385 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:50.385 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.385 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:50.385 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:50.385 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:50.385 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:50.385 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.643 1+0 records in 00:27:50.643 1+0 records out 00:27:50.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022738 s, 18.0 MB/s 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.643 1+0 records in 00:27:50.643 1+0 records out 00:27:50.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277407 s, 14.8 MB/s 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:50.643 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd 
if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.902 1+0 records in 00:27:50.902 1+0 records out 00:27:50.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336004 s, 12.2 MB/s 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:27:50.902 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:51.161 { 00:27:51.161 "nbd_device": "/dev/nbd0", 00:27:51.161 "bdev_name": "crypto_ram" 00:27:51.161 }, 00:27:51.161 { 00:27:51.161 "nbd_device": "/dev/nbd1", 00:27:51.161 "bdev_name": "crypto_ram1" 00:27:51.161 }, 00:27:51.161 { 00:27:51.161 "nbd_device": "/dev/nbd2", 00:27:51.161 "bdev_name": "crypto_ram2" 00:27:51.161 }, 00:27:51.161 { 00:27:51.161 "nbd_device": "/dev/nbd3", 00:27:51.161 "bdev_name": "crypto_ram3" 00:27:51.161 } 00:27:51.161 ]' 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:51.161 { 00:27:51.161 "nbd_device": "/dev/nbd0", 00:27:51.161 "bdev_name": "crypto_ram" 00:27:51.161 }, 00:27:51.161 { 00:27:51.161 "nbd_device": "/dev/nbd1", 00:27:51.161 "bdev_name": "crypto_ram1" 00:27:51.161 }, 00:27:51.161 { 00:27:51.161 "nbd_device": "/dev/nbd2", 00:27:51.161 "bdev_name": "crypto_ram2" 00:27:51.161 }, 00:27:51.161 { 00:27:51.161 "nbd_device": "/dev/nbd3", 00:27:51.161 "bdev_name": "crypto_ram3" 00:27:51.161 } 00:27:51.161 ]' 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:51.161 10:35:15 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:51.419 10:35:16 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:51.419 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:51.420 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:51.420 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.420 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.420 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:51.420 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:51.420 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.420 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:51.420 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:51.678 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:27:51.936 10:35:16 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:51.936 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 
00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:52.195 10:35:16 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:27:52.454 /dev/nbd0 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:52.454 1+0 records in 00:27:52.454 1+0 records out 00:27:52.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203524 s, 20.1 MB/s 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:27:52.454 /dev/nbd1 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:52.454 10:35:17 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:52.454 1+0 records in 00:27:52.454 1+0 records out 00:27:52.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286621 s, 14.3 MB/s 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:52.454 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:27:52.714 /dev/nbd10 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:52.714 1+0 records in 00:27:52.714 1+0 records out 00:27:52.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288278 s, 14.2 MB/s 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:52.714 10:35:17 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:52.714 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:27:52.974 /dev/nbd11 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:52.974 1+0 records in 00:27:52.974 1+0 records out 00:27:52.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002961 s, 13.8 MB/s 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:52.974 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:53.233 { 00:27:53.233 "nbd_device": "/dev/nbd0", 00:27:53.233 "bdev_name": 
"crypto_ram" 00:27:53.233 }, 00:27:53.233 { 00:27:53.233 "nbd_device": "/dev/nbd1", 00:27:53.233 "bdev_name": "crypto_ram1" 00:27:53.233 }, 00:27:53.233 { 00:27:53.233 "nbd_device": "/dev/nbd10", 00:27:53.233 "bdev_name": "crypto_ram2" 00:27:53.233 }, 00:27:53.233 { 00:27:53.233 "nbd_device": "/dev/nbd11", 00:27:53.233 "bdev_name": "crypto_ram3" 00:27:53.233 } 00:27:53.233 ]' 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:53.233 { 00:27:53.233 "nbd_device": "/dev/nbd0", 00:27:53.233 "bdev_name": "crypto_ram" 00:27:53.233 }, 00:27:53.233 { 00:27:53.233 "nbd_device": "/dev/nbd1", 00:27:53.233 "bdev_name": "crypto_ram1" 00:27:53.233 }, 00:27:53.233 { 00:27:53.233 "nbd_device": "/dev/nbd10", 00:27:53.233 "bdev_name": "crypto_ram2" 00:27:53.233 }, 00:27:53.233 { 00:27:53.233 "nbd_device": "/dev/nbd11", 00:27:53.233 "bdev_name": "crypto_ram3" 00:27:53.233 } 00:27:53.233 ]' 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:53.233 /dev/nbd1 00:27:53.233 /dev/nbd10 00:27:53.233 /dev/nbd11' 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:53.233 /dev/nbd1 00:27:53.233 /dev/nbd10 00:27:53.233 /dev/nbd11' 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:53.233 256+0 records in 00:27:53.233 256+0 records out 00:27:53.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110175 s, 95.2 MB/s 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:53.233 256+0 records in 00:27:53.233 256+0 records out 00:27:53.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0519011 s, 20.2 MB/s 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:53.233 256+0 records in 00:27:53.233 256+0 records out 00:27:53.233 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0433494 s, 24.2 MB/s 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:53.233 10:35:17 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:27:53.493 256+0 records in 00:27:53.493 256+0 records out 00:27:53.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0384816 s, 27.2 MB/s 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:27:53.493 256+0 records in 00:27:53.493 256+0 records out 00:27:53.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025353 s, 41.4 MB/s 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:53.493 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:53.752 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # 
break 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:54.011 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:54.269 10:35:18 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:54.269 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:54.269 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:54.269 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:54.528 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b 
malloc_lvol_verify 16 512 00:27:54.529 malloc_lvol_verify 00:27:54.529 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:54.787 e4affe77-3cf4-4000-9bcd-64f226e00eec 00:27:54.787 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:55.045 fcb5fb2d-b254-4059-a54a-7bcc96e64e2f 00:27:55.045 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:55.045 /dev/nbd0 00:27:55.045 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:55.045 mke2fs 1.46.5 (30-Dec-2021) 00:27:55.045 Discarding device blocks: 0/4096 done 00:27:55.045 Creating filesystem with 4096 1k blocks and 1024 inodes 00:27:55.045 00:27:55.045 Allocating group tables: 0/1 done 00:27:55.045 Writing inode tables: 0/1 done 00:27:55.045 Creating journal (1024 blocks): done 00:27:55.045 Writing superblocks and filesystem accounting information: 0/1 done 00:27:55.045 00:27:55.046 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:55.046 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:55.046 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.046 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:55.046 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:55.046 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:27:55.046 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:55.046 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:55.328 10:35:19 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 1951508 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 1951508 ']' 00:27:55.328 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 1951508 00:27:55.329 
10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:27:55.329 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:55.329 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1951508 00:27:55.329 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:55.329 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:55.329 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1951508' 00:27:55.329 killing process with pid 1951508 00:27:55.329 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@967 -- # kill 1951508 00:27:55.329 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@972 -- # wait 1951508 00:27:55.594 10:35:20 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:27:55.594 00:27:55.594 real 0m8.174s 00:27:55.594 user 0m10.281s 00:27:55.594 sys 0m3.135s 00:27:55.594 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:55.594 10:35:20 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:27:55.594 ************************************ 00:27:55.594 END TEST bdev_nbd 00:27:55.594 ************************************ 00:27:55.853 10:35:20 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:27:55.853 10:35:20 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:27:55.853 10:35:20 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = nvme ']' 00:27:55.853 10:35:20 blockdev_crypto_qat -- bdev/blockdev.sh@764 -- # '[' crypto_qat = gpt ']' 00:27:55.853 10:35:20 blockdev_crypto_qat -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:27:55.853 10:35:20 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:55.853 10:35:20 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.853 10:35:20 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:27:55.853 ************************************ 00:27:55.853 START TEST bdev_fio 00:27:55.853 ************************************ 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:27:55.853 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:27:55.853 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram]' 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram1]' 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram1 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram2]' 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram2 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_crypto_ram3]' 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=crypto_ram3 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:27:55.854 ************************************ 00:27:55.854 START TEST bdev_fio_rw_verify 00:27:55.854 ************************************ 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.854 10:35:20 
blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:55.854 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:56.128 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:56.128 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:56.129 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:56.129 10:35:20 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:27:56.388 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:56.388 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:56.388 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:56.388 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:56.388 fio-3.35 00:27:56.388 Starting 4 threads 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:02.3 
cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:56.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.388 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:56.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:56.389 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:11.240 00:28:11.240 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1953633: Mon Jul 15 10:35:33 2024 00:28:11.240 read: IOPS=30.6k, BW=120MiB/s (125MB/s)(1196MiB/10001msec) 00:28:11.240 slat (usec): min=11, max=551, avg=45.30, stdev=33.32 00:28:11.240 clat (usec): min=18, max=2505, avg=253.11, stdev=182.02 00:28:11.240 lat (usec): min=44, max=2546, avg=298.41, stdev=201.35 00:28:11.240 clat percentiles (usec): 00:28:11.240 | 50.000th=[ 196], 99.000th=[ 898], 99.900th=[ 1106], 99.990th=[ 1369], 00:28:11.240 | 99.999th=[ 2089] 00:28:11.240 write: IOPS=33.6k, BW=131MiB/s (138MB/s)(1282MiB/9753msec); 0 zone resets 00:28:11.240 slat (usec): min=13, max=896, avg=53.95, stdev=33.13 00:28:11.240 clat (usec): 
min=14, max=1677, avg=283.25, stdev=187.82 00:28:11.240 lat (usec): min=46, max=1920, avg=337.20, stdev=206.43 00:28:11.240 clat percentiles (usec): 00:28:11.240 | 50.000th=[ 235], 99.000th=[ 955], 99.900th=[ 1156], 99.990th=[ 1287], 00:28:11.240 | 99.999th=[ 1565] 00:28:11.240 bw ( KiB/s): min=113776, max=171472, per=97.70%, avg=131496.84, stdev=3967.82, samples=76 00:28:11.240 iops : min=28444, max=42868, avg=32874.21, stdev=992.01, samples=76 00:28:11.240 lat (usec) : 20=0.01%, 50=0.10%, 100=11.20%, 250=48.04%, 500=30.14% 00:28:11.240 lat (usec) : 750=7.14%, 1000=2.83% 00:28:11.240 lat (msec) : 2=0.56%, 4=0.01% 00:28:11.241 cpu : usr=99.69%, sys=0.00%, ctx=66, majf=0, minf=239 00:28:11.241 IO depths : 1=1.7%, 2=28.1%, 4=56.2%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.241 complete : 0=0.0%, 4=87.7%, 8=12.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.241 issued rwts: total=306106,328159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.241 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:11.241 00:28:11.241 Run status group 0 (all jobs): 00:28:11.241 READ: bw=120MiB/s (125MB/s), 120MiB/s-120MiB/s (125MB/s-125MB/s), io=1196MiB (1254MB), run=10001-10001msec 00:28:11.241 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1282MiB (1344MB), run=9753-9753msec 00:28:11.241 00:28:11.241 real 0m13.293s 00:28:11.241 user 0m51.114s 00:28:11.241 sys 0m0.409s 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:11.241 ************************************ 00:28:11.241 END TEST bdev_fio_rw_verify 00:28:11.241 ************************************ 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:11.241 
10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "76603e26-c962-56fd-a73b-c17c47642a23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "76603e26-c962-56fd-a73b-c17c47642a23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "dddf5f2d-c9f4-50be-8850-a46dcedbd9cd"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dddf5f2d-c9f4-50be-8850-a46dcedbd9cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "9a877bf7-83b0-5ba1-93fc-7fbc16452a48"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9a877bf7-83b0-5ba1-93fc-7fbc16452a48",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f48f6f93-a63a-555c-be96-c586d87da99b"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f48f6f93-a63a-555c-be96-c586d87da99b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n crypto_ram 00:28:11.241 crypto_ram1 00:28:11.241 crypto_ram2 00:28:11.241 crypto_ram3 ]] 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:11.241 10:35:33 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "76603e26-c962-56fd-a73b-c17c47642a23"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "76603e26-c962-56fd-a73b-c17c47642a23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "dddf5f2d-c9f4-50be-8850-a46dcedbd9cd"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "dddf5f2d-c9f4-50be-8850-a46dcedbd9cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "9a877bf7-83b0-5ba1-93fc-7fbc16452a48"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9a877bf7-83b0-5ba1-93fc-7fbc16452a48",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "f48f6f93-a63a-555c-be96-c586d87da99b"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f48f6f93-a63a-555c-be96-c586d87da99b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:28:11.241 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:11.241 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram]' 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram1]' 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram1 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram2]' 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram2 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_crypto_ram3]' 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=crypto_ram3 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:11.242 ************************************ 00:28:11.242 START TEST bdev_fio_trim 00:28:11.242 ************************************ 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:11.242 10:35:34 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:28:11.242 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:11.242 
job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:11.242 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:11.242 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:11.242 fio-3.35 00:28:11.242 Starting 4 threads 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:01.5 cannot be used 
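Setting the repeated QAT probe warnings aside for a moment: the command that the fio_bdev/fio_plugin wrapper traced above boils down to is an ordinary fio run against SPDK's externally built spdk_bdev ioengine. The wrapper ldd's the plugin, greps for libasan and libclang_rt.asan so that a sanitizer runtime, if present, can be prepended to LD_PRELOAD, and then preloads the plugin before calling /usr/src/fio/fio. A minimal sketch of that invocation follows; SPDK_DIR is shorthand I am introducing for /var/jenkins/workspace/crypto-phy-autotest/spdk, and the flags are the ones visible in the trace (the earlier rw-verify pass additionally passed --spdk_mem=0):

# Sketch only: the fio command assembled by the fio_bdev wrapper traced above.
SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk   # shorthand, not a variable used by the test

# Pick up a sanitizer runtime if the plugin links one (mirrors the ldd/grep/awk steps above).
asan_lib=$(ldd "$SPDK_DIR/build/fio/spdk_bdev" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')

LD_PRELOAD="$asan_lib $SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    "$SPDK_DIR/test/bdev/bdev.fio" \
    --verify_state_save=0 \
    --spdk_json_conf="$SPDK_DIR/test/bdev/bdev.json" \
    --aux-path="$SPDK_DIR/../output"

The bdev.fio job file itself is generated just above by fio_config_gen plus the jq filter select(.supported_io_types.unmap == true) | .name, which emits one [job_<bdev>] / filename=<bdev> section per unmap-capable crypto bdev.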
00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:11.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:11.242 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:23.438 00:28:23.438 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1956048: Mon Jul 15 10:35:47 2024 00:28:23.438 write: IOPS=39.0k, BW=152MiB/s (160MB/s)(1524MiB/10001msec); 0 zone resets 00:28:23.438 slat (usec): min=11, max=420, avg=59.86, stdev=29.33 00:28:23.438 clat (usec): min=27, max=1467, avg=216.70, stdev=118.18 00:28:23.438 lat (usec): min=38, max=1518, avg=276.56, stdev=134.46 00:28:23.438 clat percentiles (usec): 00:28:23.438 | 50.000th=[ 196], 99.000th=[ 545], 99.900th=[ 627], 99.990th=[ 693], 00:28:23.438 | 99.999th=[ 1352] 00:28:23.438 bw ( KiB/s): min=149280, max=182880, per=100.00%, avg=156218.95, stdev=1794.66, samples=76 00:28:23.438 iops : min=37320, max=45720, avg=39054.74, stdev=448.67, samples=76 00:28:23.438 trim: IOPS=39.0k, BW=152MiB/s (160MB/s)(1524MiB/10001msec); 0 zone resets 00:28:23.438 slat (usec): min=4, max=102, avg=16.17, stdev= 6.23 00:28:23.438 clat (usec): min=12, max=1518, avg=276.69, stdev=134.47 00:28:23.438 lat (usec): min=20, max=1533, avg=292.86, stdev=136.31 00:28:23.438 clat percentiles (usec): 00:28:23.438 | 50.000th=[ 249], 99.000th=[ 652], 99.900th=[ 750], 99.990th=[ 824], 00:28:23.438 | 99.999th=[ 1385] 00:28:23.438 bw ( KiB/s): min=149280, max=182880, per=100.00%, avg=156218.95, stdev=1794.79, samples=76 00:28:23.438 iops : min=37320, max=45720, avg=39054.63, stdev=448.74, samples=76 00:28:23.438 lat (usec) : 20=0.01%, 50=0.83%, 100=8.32%, 250=49.97%, 500=35.14% 00:28:23.438 lat (usec) : 750=5.70%, 1000=0.05% 00:28:23.438 lat (msec) : 2=0.01% 00:28:23.438 cpu : usr=99.67%, sys=0.00%, ctx=138, majf=0, minf=106 00:28:23.438 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.438 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.438 issued rwts: total=0,390147,390148,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.438 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:23.438 00:28:23.438 Run status group 0 (all jobs): 00:28:23.438 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), 
io=1524MiB (1598MB), run=10001-10001msec 00:28:23.438 TRIM: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=1524MiB (1598MB), run=10001-10001msec 00:28:23.438 00:28:23.438 real 0m13.273s 00:28:23.438 user 0m50.859s 00:28:23.438 sys 0m0.428s 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:28:23.438 ************************************ 00:28:23.438 END TEST bdev_fio_trim 00:28:23.438 ************************************ 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:28:23.438 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:28:23.438 00:28:23.438 real 0m26.921s 00:28:23.438 user 1m42.150s 00:28:23.438 sys 0m1.038s 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:23.438 ************************************ 00:28:23.438 END TEST bdev_fio 00:28:23.438 ************************************ 00:28:23.438 10:35:47 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:28:23.438 10:35:47 blockdev_crypto_qat -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:23.438 10:35:47 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:23.438 10:35:47 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:28:23.438 10:35:47 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.438 10:35:47 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:23.438 ************************************ 00:28:23.438 START TEST bdev_verify 00:28:23.438 ************************************ 00:28:23.438 10:35:47 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:23.438 [2024-07-15 10:35:47.510584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:23.438 [2024-07-15 10:35:47.510629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957691 ] 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:23.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.438 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:23.439 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:23.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:23.439 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:23.439 [2024-07-15 10:35:47.602002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:23.439 [2024-07-15 10:35:47.674313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.439 [2024-07-15 10:35:47.674316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.439 [2024-07-15 10:35:47.695274] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:23.439 [2024-07-15 10:35:47.703302] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:23.439 [2024-07-15 10:35:47.711321] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:23.439 [2024-07-15 10:35:47.812857] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:25.336 [2024-07-15 10:35:49.948926] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:25.336 [2024-07-15 10:35:49.948995] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:25.336 [2024-07-15 10:35:49.949023] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:25.336 [2024-07-15 10:35:49.956943] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:25.336 [2024-07-15 10:35:49.956957] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:25.336 [2024-07-15 10:35:49.956965] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:25.336 [2024-07-15 10:35:49.964963] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:25.336 [2024-07-15 10:35:49.964976] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:25.336 [2024-07-15 10:35:49.964984] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:25.336 [2024-07-15 10:35:49.972984] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:25.336 [2024-07-15 10:35:49.972997] bdev.c:8157:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: Malloc3 00:28:25.336 [2024-07-15 10:35:49.973005] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:25.336 Running I/O for 5 seconds... 00:28:30.632 00:28:30.632 Latency(us) 00:28:30.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.632 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.632 Verification LBA range: start 0x0 length 0x1000 00:28:30.632 crypto_ram : 5.04 688.80 2.69 0.00 0.00 185310.79 717.62 122473.68 00:28:30.632 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.632 Verification LBA range: start 0x1000 length 0x1000 00:28:30.632 crypto_ram : 5.04 711.28 2.78 0.00 0.00 179547.46 8074.04 119957.09 00:28:30.632 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.632 Verification LBA range: start 0x0 length 0x1000 00:28:30.632 crypto_ram1 : 5.04 691.78 2.70 0.00 0.00 184303.95 501.35 112407.35 00:28:30.632 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.632 Verification LBA range: start 0x1000 length 0x1000 00:28:30.632 crypto_ram1 : 5.04 711.18 2.78 0.00 0.00 179252.84 9646.90 109890.76 00:28:30.632 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.632 Verification LBA range: start 0x0 length 0x1000 00:28:30.632 crypto_ram2 : 5.03 5443.06 21.26 0.00 0.00 23368.42 4482.66 18245.22 00:28:30.632 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.632 Verification LBA range: start 0x1000 length 0x1000 00:28:30.632 crypto_ram2 : 5.03 5627.05 21.98 0.00 0.00 22606.81 5111.81 17511.22 00:28:30.632 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:30.632 Verification LBA range: start 0x0 length 0x1000 00:28:30.632 crypto_ram3 : 5.04 5452.34 21.30 0.00 0.00 23289.60 540.67 18350.08 00:28:30.632 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:30.632 Verification LBA range: start 0x1000 length 0x1000 00:28:30.632 crypto_ram3 : 5.04 5643.42 22.04 0.00 0.00 22503.59 1448.35 17406.36 00:28:30.632 =================================================================================================================== 00:28:30.632 Total : 24968.91 97.53 0.00 0.00 40823.54 501.35 122473.68 00:28:30.632 00:28:30.632 real 0m7.950s 00:28:30.632 user 0m15.235s 00:28:30.632 sys 0m0.294s 00:28:30.632 10:35:55 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.632 10:35:55 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:28:30.632 ************************************ 00:28:30.632 END TEST bdev_verify 00:28:30.632 ************************************ 00:28:30.889 10:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0 00:28:30.889 10:35:55 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:30.889 10:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:28:30.889 10:35:55 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.889 10:35:55 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:30.889 
************************************ 00:28:30.889 START TEST bdev_verify_big_io 00:28:30.889 ************************************ 00:28:30.889 10:35:55 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:30.889 [2024-07-15 10:35:55.551083] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:30.889 [2024-07-15 10:35:55.551132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1959014 ] 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.889 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:30.889 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:30.890 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:30.890 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:30.890 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:30.890 [2024-07-15 10:35:55.644961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:31.147 [2024-07-15 10:35:55.715316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.147 [2024-07-15 10:35:55.715318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.147 [2024-07-15 10:35:55.736390] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:31.147 [2024-07-15 10:35:55.744419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:31.147 [2024-07-15 10:35:55.752437] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:31.147 [2024-07-15 10:35:55.854865] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:33.673 [2024-07-15 10:35:57.990967] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:33.673 [2024-07-15 10:35:57.991052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:33.673 [2024-07-15 10:35:57.991063] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:33.673 [2024-07-15 10:35:57.998990] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:33.673 [2024-07-15 10:35:57.999003] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:33.673 [2024-07-15 10:35:57.999011] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:33.673 [2024-07-15 10:35:58.007022] 
vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:33.673 [2024-07-15 10:35:58.007034] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:33.673 [2024-07-15 10:35:58.007042] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:33.673 [2024-07-15 10:35:58.015030] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:28:33.673 [2024-07-15 10:35:58.015041] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:28:33.673 [2024-07-15 10:35:58.015048] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:33.673 Running I/O for 5 seconds... 00:28:33.997 [2024-07-15 10:35:58.534618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.534910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.534979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.535023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.535052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.535090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.535318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.535333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.537961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.537994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.538062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.538103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.538446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.538485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.538512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.538539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.538826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.538837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
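Both bdev_verify above and this bdev_verify_big_io pass drive the same bdevperf example binary against the generated bdev.json; only the I/O size differs (-o 4096 there, -o 65536 here). A sketch of the invocation as it appears in the log, with the option meanings annotated as I read them (SPDK_DIR is again my shorthand for /var/jenkins/workspace/crypto-phy-autotest/spdk):

# Sketch of the bdevperf command used by the verify tests above (big-IO variant).
#   --json   : bdev configuration - the crypto_ram* bdevs layered on Malloc* with the test_dek_qat_* keys
#   -q 128   : queue depth per job
#   -o 65536 : 64 KiB I/Os (the plain bdev_verify run used -o 4096)
#   -w verify -t 5 : read-back verification workload, run for 5 seconds
#   -C -m 0x3 : core mask 0x3; as I read it, -C is what lets every bdev be driven from both
#               cores, matching the latency table above that lists core masks 0x1 and 0x2
#               for each crypto_ram* bdev
#   the trailing '' is an empty positional argument passed through by the test script as logged
SPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk   # shorthand, not from the log
"$SPDK_DIR/build/examples/bdevperf" \
    --json "$SPDK_DIR/test/bdev/bdev.json" \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''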
00:28:33.997 [2024-07-15 10:35:58.541393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:33.997 (the same accel_dpdk_cryptodev_task_alloc_resources *ERROR*: Failed to get src_mbufs! line repeats continuously, timestamps 10:35:58.541393 through 10:35:58.564090, while the big-IO verify jobs run)
00:28:33.997 [2024-07-15 10:35:58.564116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.564142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.564463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.997 [2024-07-15 10:35:58.564502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.564529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.564567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.564843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.564854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.567376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.567407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.567433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.567460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.567726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.567755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.567781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.567824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.568093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.568105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.570777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.570818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.570853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.570882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.571174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.571203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.998 [2024-07-15 10:35:58.571229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.571256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.571517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.571528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.573914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.573944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.573982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.574008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.574364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.574395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.574422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.574448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.574784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.574796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577138] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.577974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.998 [2024-07-15 10:35:58.580173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.580955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.583920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.998 [2024-07-15 10:35:58.586575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.586974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.589947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.998 [2024-07-15 10:35:58.592938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.592950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.595993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.596005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.598525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.598560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.598586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.598612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.598862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.598891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.598921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.598947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.599259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.599270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.601412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.601465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.998 [2024-07-15 10:35:58.601502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.601528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.601829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.601857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.601883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.601913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.602208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.602221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.604296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.604335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.604361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.604390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.604735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.604763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.604790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.604817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.998 [2024-07-15 10:35:58.605116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.605127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.607396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.607431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.607457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.607484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.607815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.607844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.999 [2024-07-15 10:35:58.607870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.607896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.608234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.608246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.610260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.610292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.610319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.610345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.610696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.610726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.610752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.610778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.611085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.611097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.613820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.999 [2024-07-15 10:35:58.615776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.615809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.615837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.615867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.616191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.616220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.616259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.616303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.616593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.616604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.618779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.618812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.618839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.618864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.619059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.619092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.619118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.619148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.619317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.619327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.620634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.620663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.620688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.620714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.999 [2024-07-15 10:35:58.620928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.620955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.620980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.621006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.621173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.621183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.623791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.999 [2024-07-15 10:35:58.625534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.625544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.627998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.628008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.630025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.630810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.631697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.632578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.633126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.633378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.633627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.633876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.634093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.634104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.636010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.636757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.999 [2024-07-15 10:35:58.637634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.638509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.638980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.639232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.639480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.639729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.639974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.639985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.641715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.642458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.643338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.644217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.644650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.644909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.645159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.645408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.645675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.645686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.647266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.648012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.648894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.649773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.650205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.650455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:33.999 [2024-07-15 10:35:58.650705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.650959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.651270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.651282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.652795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:33.999 [2024-07-15 10:35:58.653609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.654547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.655467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.656019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.656272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.656523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.656774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.657080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.657092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.658691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.659691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.660612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.661503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.662246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.662510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.662765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.663031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.663334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.663346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.000 [2024-07-15 10:35:58.665065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.666034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.666878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.667754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.668520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.668786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.669041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.669302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.669618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.669630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.671355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.672285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.673100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.673978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.674803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.675067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.675324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.675594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.675908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.675921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.677725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.678597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.679338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.000 [2024-07-15 10:35:58.680212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.000 [2024-07-15 10:35:58.681091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the same "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 (accel_dpdk_cryptodev_task_alloc_resources) repeats several hundred times between 10:35:58.681 and 10:35:58.902; duplicate lines omitted ...]
00:28:34.266 [2024-07-15 10:35:58.902211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:34.266 [2024-07-15 10:35:58.902238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.902266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.902559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.902594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.902622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.902649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.902677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.902980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.902993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.904707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.905025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.905037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.906616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.906646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.266 [2024-07-15 10:35:58.906672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.906698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.267 [2024-07-15 10:35:58.906872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.906916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.906943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.906969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.906995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.907166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.907177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.908949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.267 [2024-07-15 10:35:58.911375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.911613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.912708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.912741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.912771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.912797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.912975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.913006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.913040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.913070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.913096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.913268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.913279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.914801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.914832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.915089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.915118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.915357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.915393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.915419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.915445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.267 [2024-07-15 10:35:58.915471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.915674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.915686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.916763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.916794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.916820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.917533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.917717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.917759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.917792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.917818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.917844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.918019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.918030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.919813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.920165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.920995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.922014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.922193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.923058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.923548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.924516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.925351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.925527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.267 [2024-07-15 10:35:58.925538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.927163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.927417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.928240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.928959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.929137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.930012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.930808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.267 [2024-07-15 10:35:58.931491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.932223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.932398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.932409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.934027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.934285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.934827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.935516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.935693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.936585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.937443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.937836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.938619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.938794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.938805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.940243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.268 [2024-07-15 10:35:58.940495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.940743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.941719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.941926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.942824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.943678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.944308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.945165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.945387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.945398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.946743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.946998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.947250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.947899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.948112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.949047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.950009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.950988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.951523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.951704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.951718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.953013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.953266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.953517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.268 [2024-07-15 10:35:58.953803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.953984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.954699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.955550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.956388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.956888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.957092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.957104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.958343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.958597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.958847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.959101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.959355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.960122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.961017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.961940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.962890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.963151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.963162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.964716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.964990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.965240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.965492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.965803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.268 [2024-07-15 10:35:58.966690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.967412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.968260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.969117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.969332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.969343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.971256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.971512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.971761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.972015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.972344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.972893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.973613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.974461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.975312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.975489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.975500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.977483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.977919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.978173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.978425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.978738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.978999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.979943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.268 [2024-07-15 10:35:58.980835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.981707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.981887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.981899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.983907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.984715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.984977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.985234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.985510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.985774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.986357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.987075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.987917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.268 [2024-07-15 10:35:58.988093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.988105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.990088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.990987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.991353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.991603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.991921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.992198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.992456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.993398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.994147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.269 [2024-07-15 10:35:58.994323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.994334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.996237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.997091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.997979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.998232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.998541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.998796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.999051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:58.999522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.000226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.000402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.000413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.002199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.003042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.003894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.004403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.004743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.005008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.005260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.005513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.006429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.006634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.006645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.269 [2024-07-15 10:35:59.008419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.009280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.010143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.010962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.011241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.011498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.011745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.011999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.012585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.012813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.012824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.014852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.015686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.016552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.017394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.017644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.017912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.018160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.018422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.018672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.018845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.018856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.020441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.021156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.269 [2024-07-15 10:35:59.022029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.022882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.023097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.023363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.023613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.023863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.024120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.024330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.024341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.025739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.026602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.027583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.028520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.028698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.029232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.029494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.029744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.029995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.030307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.030320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.032101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.032815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.033525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.034377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.269 [2024-07-15 10:35:59.034554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.035483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.035757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.036019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.036278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.036590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.036602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.038580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.038996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.039836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.040860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.041045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.041938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.042415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.269 [2024-07-15 10:35:59.042674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.270 [2024-07-15 10:35:59.042941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.270 [2024-07-15 10:35:59.043327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.270 [2024-07-15 10:35:59.043339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.270 [2024-07-15 10:35:59.045436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.270 [2024-07-15 10:35:59.046152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.270 [2024-07-15 10:35:59.046987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.270 [2024-07-15 10:35:59.047725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.270 [2024-07-15 10:35:59.047913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.532 [2024-07-15 10:35:59.048814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.532 [2024-07-15 10:35:59.049661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:34.537 [... the identical "accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!" message repeats for each subsequent allocation attempt, timestamps 2024-07-15 10:35:59.049926 through 10:35:59.225346 ...]
00:28:34.537 [2024-07-15 10:35:59.225523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.537 [2024-07-15 10:35:59.226398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.537 [2024-07-15 10:35:59.226932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.537 [2024-07-15 10:35:59.227190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.537 [2024-07-15 10:35:59.227440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.537 [2024-07-15 10:35:59.227764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.537 [2024-07-15 10:35:59.227775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.229839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.230598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.231397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.232113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.232295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.233168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.233993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.234245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.234498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.234733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.234745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.236946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.237839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.238236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.238977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.239157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.240023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.538 [2024-07-15 10:35:59.240871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.241258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.241509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.241822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.241834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.244291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.245141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.245746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.246623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.246834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.247692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.248548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.249294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.249546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.249851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.249864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.252188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.253108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.254059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.254691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.254927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.255928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.256914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.257870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.538 [2024-07-15 10:35:59.258128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.258430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.258441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.260658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.261533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.262374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.262786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.262970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.263677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.264530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.265398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.265833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.266132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.266143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.268408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.269256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.270116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.270783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.270978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.271694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.272553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.273412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.274221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.274494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.538 [2024-07-15 10:35:59.274505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.276888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.277800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.278743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.279715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.280018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.280894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.281878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.282793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.283660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.283933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.283944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.286759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.287503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.288395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.289272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.289496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.290221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.538 [2024-07-15 10:35:59.290966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.291841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.292729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.292917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.292929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.295108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.539 [2024-07-15 10:35:59.295853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.296740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.297650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.297835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.298248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.299092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.300096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.301121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.301304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.301315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.303212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.303478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.304438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.305353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.305536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.306433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.307042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.307967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.308708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.308887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.308898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.310408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.310663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.311424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.539 [2024-07-15 10:35:59.312164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.312343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.313305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.314284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.314847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.315579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.315764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.539 [2024-07-15 10:35:59.315775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.317425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.317694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.317983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.318882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.319074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.319959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.320812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.321347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.322326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.322561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.322571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.323915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.324171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.324422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.325126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.325355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.802 [2024-07-15 10:35:59.326228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.327114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.328046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.328612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.328829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.328845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.330226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.330493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.330756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.331072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.331256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.332006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.332871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.333717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.334230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.334407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.334418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.335689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.335945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.336196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.336447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.336678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.337409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.338296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.802 [2024-07-15 10:35:59.339217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.340169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.340428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.340439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.341960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.342223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.342473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.342730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.343039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.343797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.344531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.345426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.346278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.346459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.346470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.348398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.348661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.348918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.349169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.349484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.349843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.350640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.351579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.352531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.802 [2024-07-15 10:35:59.352710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.352721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.354710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.355173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.355433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.355688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.356025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.356287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.357157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.357879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.358774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.358968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.358979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.360975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.361821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.362078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.362329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.802 [2024-07-15 10:35:59.362594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.362855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.363467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.364192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.365054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.365230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.365241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.803 [2024-07-15 10:35:59.367225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.368095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.368449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.368700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.369005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.369266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.369520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.370446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.371210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.371388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.371399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.373321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.374185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.375017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.375271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.375581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.375838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.376094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.376683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.377405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.377584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.377595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.379687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.380641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.803 [2024-07-15 10:35:59.381573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.381836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.382164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.382424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.382680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.382936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.383863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.384043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.384054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.385842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.386697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.387564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.388320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.388596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.388854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.389105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.389355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.390017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.390230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.390241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.392162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.393114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.394080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.395064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.803 [2024-07-15 10:35:59.395339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.395602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.395856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.396112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.396436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.396613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.396624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.398272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.399003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.399863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.400737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.400920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.401184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.401435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.401685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.401941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.402142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.402152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.403605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.404330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.405198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.406073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.406252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.406583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.803 [2024-07-15 10:35:59.406837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.407094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.407346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.407655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.407666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.409435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.410166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.410968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.411573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.411853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.412117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.412368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.412619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.413456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.413680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.413690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.415627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.416576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.416608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.417450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.417619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.417937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.803 [2024-07-15 10:35:59.418193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:34.804 [2024-07-15 10:35:59.418432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:34.804 [2024-07-15 10:35:59.418681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:34.804 [... identical "accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!" message repeated several hundred times between 10:35:59.418681 and 10:35:59.588757; intermediate duplicates omitted ...]
00:28:35.070 [2024-07-15 10:35:59.588757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:28:35.070 [2024-07-15 10:35:59.589641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.590496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.591162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.591409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.591420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.593767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.594621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.595475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.596303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.596542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.597298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.598217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.599193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.600187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.600478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.600490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.603306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.604221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.605107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.605967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.606244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.607212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.608020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.608872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.070 [2024-07-15 10:35:59.609734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.609977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.609988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.612593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.613338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.614201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.615064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.615296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.616039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.616761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.617632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.618501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.618696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.618707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.620962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.621687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.622547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.623394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.623563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.624058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.070 [2024-07-15 10:35:59.624727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.625596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.626471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.626647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.071 [2024-07-15 10:35:59.626659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.628681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.629470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.630401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.631342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.631518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.631930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.632792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.633782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.634757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.634936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.634947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.636788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.637773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.638600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.639443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.639620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.640261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.641133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.641856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.642703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.642878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.642889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.644645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.071 [2024-07-15 10:35:59.645311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.646034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.646890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.647100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.647981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.648635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.649360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.650229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.650405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.650416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.652140] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.652690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.653409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.654253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.654430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.655317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.655702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.656392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.657258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.657438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.657448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.659145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.659400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.660297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.071 [2024-07-15 10:35:59.661251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.661429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.662315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.662823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.663807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.664642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.664818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.664828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.666456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.666714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.667529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.668255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.668431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.669314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.670125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.670841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.671560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.671736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.671747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.673300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.673560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.674055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.674774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.674954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.071 [2024-07-15 10:35:59.675815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.676696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.677094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.677806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.677996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.678007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.679519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.679787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.680045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.681006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.681185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.682060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.682921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.683496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.684412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.684619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.684630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.686020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.686277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.686531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.687168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.687379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.688327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.689265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.071 [2024-07-15 10:35:59.690193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.690771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.690985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.690996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.692339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.692599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.692853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.693193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.693368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.694110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.694964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.695830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.696273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.696449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.696459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.697755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.698018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.698273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.698528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.698728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.699458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.700325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.701199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.701996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.071 [2024-07-15 10:35:59.702206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.702217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.703564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.703818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.704076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.704329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.704621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.705595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.706470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.707324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.708176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.071 [2024-07-15 10:35:59.708425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.708436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.710189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.710449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.710703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.710960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.711257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.711635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.712497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.713373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.714156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.714406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.714417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.072 [2024-07-15 10:35:59.715780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.716043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.716077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.716325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.716592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.717506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.718442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.719368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.720278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.720545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.720556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.722327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.722588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.722842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.722871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.723114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.723375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.723630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.723884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.724145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.724456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.724467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.072 [2024-07-15 10:35:59.726141] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.726849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.728935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.729233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.729245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.730834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.730864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.730904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.730945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.072 [2024-07-15 10:35:59.731281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.731314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.731341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.731368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.731395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.731687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.731699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.733909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.734185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.734196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.735865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.735907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.735949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.735976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.736278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.736312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.072 [2024-07-15 10:35:59.736340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.736367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.736394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.736648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.736659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.738859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.739099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.739111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.740748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.740781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.740808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.740834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.741135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.741169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.741197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.741225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.072 [2024-07-15 10:35:59.741252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.741488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.741499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.743866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.745961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.746272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.072 [2024-07-15 10:35:59.746284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.072 [2024-07-15 10:35:59.747900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.747933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.747961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.747987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.748287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.748318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.748360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.748398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.748427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.748757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.748768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.750913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.751250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.751262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.073 [2024-07-15 10:35:59.752854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:28:35.074 [2024-07-15 10:35:59.789453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:35:59.999158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.000163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.000220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.000519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.000822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.001216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.008207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.008262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.009179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.332 [2024-07-15 10:36:00.009228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:28:35.593 [2024-07-15 10:36:00.152723] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.152802] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.153105] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.153172] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.153458] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.153736] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.155572] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.156457] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.157478] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.593 [2024-07-15 10:36:00.158441] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.594 [2024-07-15 10:36:00.158800] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.594 [2024-07-15 10:36:00.159100] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.594 [2024-07-15 10:36:00.159406] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.596 [2024-07-15 10:36:00.315783] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.316687] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.316744] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.317044] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.317352] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.317398] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.317698] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.317746] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.318102] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.318129] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.318151] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.320068] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.320120] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.320421] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.320472] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.320866] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.321183] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.321230] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.321524] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.321590] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.321973] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.321995] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.322018] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.596 [2024-07-15 10:36:00.324004] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.596 [2024-07-15 10:36:00.324052] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.324349] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.324399] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.324665] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.324992] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.325039] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.325335] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.325376] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.325700] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.325721] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.325742] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.328061] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.328117] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.328416] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.328460] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.328795] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.329146] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.329194] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.329488] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.329531] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.329857] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.329880] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.329908] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.332031] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.597 [2024-07-15 10:36:00.332083] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.332383] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.332429] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.332698] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.333015] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.333063] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.333357] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.333397] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.333731] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.333752] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.333773] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.335693] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.335745] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.336064] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.336111] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.336383] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.336705] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.336753] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.337056] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.337097] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.337429] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.337449] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.337476] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.339323] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.597 [2024-07-15 10:36:00.339373] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.339418] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.339461] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.339771] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.340093] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.340140] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.340439] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.340481] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.340812] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.340830] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.340848] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.342577] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.342620] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.342661] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.342707] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.343050] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.343111] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.343163] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.343218] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.343260] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.343595] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.343617] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.343641] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.345343] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.597 [2024-07-15 10:36:00.345387] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.345438] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.345478] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.345833] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.345891] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.345940] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.345983] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.346028] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.346300] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.346321] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.346341] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348105] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348151] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348198] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348242] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348499] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348555] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348594] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348635] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.348676] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.349045] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.349066] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.349084] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.597 [2024-07-15 10:36:00.350561] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.598 [2024-07-15 10:36:00.350609] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.350666] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.350715] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.351082] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.351135] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.351178] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.351222] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.351266] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.351466] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.351486] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.351504] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.352686] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.352735] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.352789] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.352829] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.353135] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.353193] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.353236] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.353281] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.353325] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.353631] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.353662] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.353682] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.354972] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.598 [2024-07-15 10:36:00.355017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355060] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355107] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355302] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355357] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355407] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355448] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355487] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355690] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355712] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.355740] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.357513] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.357573] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.357613] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.357655] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.357887] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.357951] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.357998] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.358040] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.358079] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.358360] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.358380] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.358398] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.359873] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.598 [2024-07-15 10:36:00.359927] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.359978] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360021] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360381] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360435] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360481] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360526] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360568] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360804] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360824] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.360842] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362049] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362094] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362164] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362213] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362512] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362567] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362610] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362653] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.362700] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.363039] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.363061] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.598 [2024-07-15 10:36:00.363090] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.364470] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.599 [2024-07-15 10:36:00.364520] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.364564] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.364612] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.364810] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.364864] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.364923] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.364964] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.365004] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.365218] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.365239] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.365258] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367200] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367248] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367291] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367333] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367558] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367614] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367653] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367694] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.367734] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.368038] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.368059] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.368077] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.369518] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.599 [2024-07-15 10:36:00.369566] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.369617] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.369670] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.370041] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.370098] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.370142] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.370186] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.370233] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.370456] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.370476] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.370494] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.371727] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.371770] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.371830] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.371882] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.372195] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.372251] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.372294] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.372341] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.372385] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.372698] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.372728] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.372756] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374149] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.599 [2024-07-15 10:36:00.374194] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374237] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374283] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374478] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374532] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374582] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374623] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374664] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374888] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.374918] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376303] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376361] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376409] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376452] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376791] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376844] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376889] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376938] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.376982] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.377177] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.377197] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.377217] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.378401] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.599 [2024-07-15 10:36:00.378445] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.378820] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.378865] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.378921] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.379264] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.599 [2024-07-15 10:36:00.379284] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.380769] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.381487] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.381543] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.382328] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.382382] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.382884] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.383107] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.383127] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.383206] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.383504] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.383581] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.384563] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.384924] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.384946] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.386264] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.387017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.387101] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.387545] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.861 [2024-07-15 10:36:00.387907] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.387932] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.388007] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.388298] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.388363] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.388638] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.388828] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.388846] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.390047] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.390640] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.390709] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.391095] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.391433] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.391454] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.391542] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.392407] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.392484] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.392761] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.392966] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.392985] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.394152] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.394439] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.394505] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.394782] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.861 [2024-07-15 10:36:00.395167] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.395190] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.395272] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.395696] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.395760] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.396527] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.396740] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.396767] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.397964] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.398264] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.398333] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.399017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.399346] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.399366] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.399446] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.399731] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.399798] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.400568] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.400769] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.400788] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.402116] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.402409] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.402504] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:35.861 [2024-07-15 10:36:00.402789] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:35.861 [2024-07-15 10:36:00.403106] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:35.861 ... (same *ERROR* from accel_dpdk_cryptodev.c:476 accel_dpdk_cryptodev_task_alloc_resources repeated for each allocation attempt from 10:36:00.403 to 10:36:00.656; duplicate entries omitted) ...
00:28:36.129 [2024-07-15 10:36:00.656042] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:36.129 [2024-07-15 10:36:00.656389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.656409] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.656725] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.656769] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.657528] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.657575] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.657773] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.657793] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.658963] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.659007] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.659061] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.659101] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.659441] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.659463] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.129 [2024-07-15 10:36:00.659517] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.659560] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.659603] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.659648] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.659846] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.659865] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.662499] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.662550] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.662591] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.662633] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.130 [2024-07-15 10:36:00.662891] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.662922] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.662982] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.663025] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.663065] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.663106] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.663340] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.663361] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.664956] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665005] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665048] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665093] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665288] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665308] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665381] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665424] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665463] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665504] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665758] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.665779] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669303] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669351] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669394] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669435] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.130 [2024-07-15 10:36:00.669718] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669738] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669795] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669836] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669878] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.669929] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.670225] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.670249] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.671424] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.671467] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.671508] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.671549] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.671873] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.671896] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.671967] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.672007] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.672048] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.672099] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.672398] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.672419] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674277] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674343] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674386] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674429] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.130 [2024-07-15 10:36:00.674621] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674639] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674691] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674737] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674786] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.674826] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.675047] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.675069] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.676478] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.676521] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.676560] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.676616] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.676971] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.677003] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.677062] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.677103] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.677146] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.677188] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.677401] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.677420] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680296] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680346] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680431] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.130 [2024-07-15 10:36:00.680758] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680779] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680829] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680871] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680923] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.680962] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.681155] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.681184] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.682565] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.130 [2024-07-15 10:36:00.682612] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.682654] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.682695] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.682886] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.682912] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.682972] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.683022] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.683064] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.683102] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.683299] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.683320] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.686947] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687010] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687052] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687093] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.131 [2024-07-15 10:36:00.687318] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687337] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687393] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687431] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687470] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687510] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687770] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.687790] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.688971] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689014] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689054] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689098] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689425] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689447] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689498] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689536] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689583] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689622] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689815] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.689845] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692269] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692316] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692359] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692401] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.131 [2024-07-15 10:36:00.692720] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692741] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692797] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692838] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692878] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.692927] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.693227] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.693249] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.694866] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.694929] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.694971] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695033] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695220] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695238] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695298] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695338] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695379] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695421] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695612] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.695632] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698430] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698482] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698527] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698569] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.131 [2024-07-15 10:36:00.698760] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698778] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698834] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698874] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698926] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.698967] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.699301] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.699323] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700358] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700404] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700451] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700493] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700746] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700765] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700815] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700855] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700909] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.700968] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.701165] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.701184] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.704065] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.704113] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.704166] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.704467] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.131 [2024-07-15 10:36:00.704742] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.704761] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.704815] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.131 [2024-07-15 10:36:00.704854] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.704893] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.704941] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.705135] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.705154] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.706351] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.707343] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.707387] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.708334] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.708606] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.708625] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.708688] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.709496] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.709543] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.709845] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.710068] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.710099] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.712548] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.713271] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.713317] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.714228] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.132 [2024-07-15 10:36:00.714434] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.714453] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.714511] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.715473] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.715517] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.716409] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.716761] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.716784] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.718540] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.719490] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.719537] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.720481] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.720675] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.720694] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.720749] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.721300] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.721344] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.722174] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.722373] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.722391] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.724927] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.725240] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.725292] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.726063] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.132 [2024-07-15 10:36:00.726293] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.726312] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.726367] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.727306] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.727356] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.728263] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.728520] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.728538] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.729639] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.730640] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.730696] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.731004] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.731252] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.731270] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.731323] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.731822] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.731868] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.732293] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.732513] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.732533] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.735348] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.736301] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.736347] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.736998] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.132 [2024-07-15 10:36:00.737260] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.737280] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.737335] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.737631] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.737674] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.738548] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.738899] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.738926] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.740048] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.740849] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.740892] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.741844] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.742048] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.742067] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.742125] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.743073] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.743116] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.743697] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.743894] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.743921] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.745746] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.746724] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.746768] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.132 [2024-07-15 10:36:00.747697] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.132 [2024-07-15 10:36:00.748040] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.748059] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.748115] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.748937] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.748979] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.749914] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.750113] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.750131] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.752551] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.753271] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.753316] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.754186] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.754390] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.754409] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.754466] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.755403] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.755448] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.756386] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.756592] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.756611] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.759830] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.760533] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.760579] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.133 [2024-07-15 10:36:00.760877] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.133 [2024-07-15 10:36:00.761087] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:36.133 [... the same "Failed to get dst_mbufs!" error from accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources repeats continuously from 10:36:00.761 through 10:36:01.042 ...]
00:28:36.400 [2024-07-15 10:36:01.042927] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:36.400 [2024-07-15 10:36:01.043256] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.400 [2024-07-15 10:36:01.043278] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.400 [2024-07-15 10:36:01.043331] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.400 [2024-07-15 10:36:01.043377] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.400 [2024-07-15 10:36:01.043417] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.400 [2024-07-15 10:36:01.043458] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.043657] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.043676] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046038] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046088] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046128] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046392] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046410] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046469] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046507] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046548] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046595] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046786] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.046805] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049185] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049237] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049282] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049324] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.401 [2024-07-15 10:36:01.049613] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049631] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049683] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049725] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049765] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.049805] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.050045] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.050066] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.052900] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.052953] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.052998] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053040] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053314] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053333] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053428] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053469] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053508] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053808] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.053828] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.055710] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.055760] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.055805] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.055844] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.401 [2024-07-15 10:36:01.056046] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.056064] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.056118] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.056161] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.056214] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.056262] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.056517] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.056537] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059020] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059070] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059116] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059157] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059377] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059395] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059452] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059490] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059548] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059590] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059927] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.059952] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.062759] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.062822] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.062867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.062914] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.401 [2024-07-15 10:36:01.063129] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.063148] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.063204] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.063244] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.063287] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.063336] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.063543] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.063563] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.066870] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.066924] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.066968] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067010] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067256] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067274] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067330] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067368] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067409] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067450] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067641] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.067661] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.070790] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.070839] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.070889] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.070940] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.401 [2024-07-15 10:36:01.071143] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.071162] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.071216] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.071256] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.071302] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.071344] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.071663] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.071686] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.074898] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.074953] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.074993] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.075032] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.075296] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.075315] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.075375] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.075413] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.075452] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.401 [2024-07-15 10:36:01.075492] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.075734] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.075752] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.078478] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.078525] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.078568] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.078613] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.402 [2024-07-15 10:36:01.078809] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.078827] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.078880] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.078933] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.078976] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.079021] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.079216] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.079234] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082214] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082268] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082310] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082353] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082706] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082724] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082776] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082822] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082864] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.082913] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.083174] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.083195] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.085583] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.085629] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.085672] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.086493] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.402 [2024-07-15 10:36:01.086696] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.086716] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.086771] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.086811] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.086865] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.086911] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.087108] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.087128] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.089251] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.090208] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.090254] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.090818] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.091029] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.091057] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.091127] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.092031] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.092076] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.093015] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.093278] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.093298] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.095655] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.096598] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.096644] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.097172] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.402 [2024-07-15 10:36:01.097435] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.097453] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.097508] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.098448] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.098491] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.099432] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.099719] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.099738] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.102187] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.103041] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.103087] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.103604] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.103835] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.103853] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.103914] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.104843] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.104885] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.105822] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.106113] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.106139] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.108602] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.109551] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.109597] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.110272] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.402 [2024-07-15 10:36:01.110505] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.110523] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.110579] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.111522] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.111565] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.112535] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.112825] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.112844] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.115554] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.116564] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.116616] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.402 [2024-07-15 10:36:01.117484] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.117736] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.117755] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.117810] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.118762] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.118814] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.119124] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.119446] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.119468] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.122053] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.122884] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.122938] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.123242] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.403 [2024-07-15 10:36:01.123573] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.123598] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.123661] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.123967] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.124013] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.124887] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.125146] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.125164] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.127297] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.127601] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.127646] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.128528] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.128723] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.128742] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.128802] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.129509] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.129554] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.130367] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.130735] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.130754] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.133191] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.134131] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.134183] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.135190] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.403 [2024-07-15 10:36:01.135464] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.135483] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.135540] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.135839] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.135882] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.136199] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.136523] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.136541] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.139347] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.139652] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.139707] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.140012] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.140274] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.140292] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.140346] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.141120] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.141165] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.141634] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.141841] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.141860] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.144563] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.145331] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.145378] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.145962] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.403 [2024-07-15 10:36:01.146212] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.146230] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.146286] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.146916] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.146972] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.147268] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.147577] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.147603] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.149700] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.150151] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.150200] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.150498] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.150764] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.150783] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.150844] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.151148] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.151203] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.152062] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.152256] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.152274] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.155169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.156059] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.156108] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.156852] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.403 [2024-07-15 10:36:01.157059] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.157078] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.157134] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.158029] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.158081] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.158391] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.158709] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.158731] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.161167] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.162212] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.162266] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.162573] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.162916] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.162940] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.162994] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.163303] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.163349] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.164086] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.164331] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.164350] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.403 [2024-07-15 10:36:01.166739] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.404 [2024-07-15 10:36:01.167064] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.404 [2024-07-15 10:36:01.167112] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.404 [2024-07-15 10:36:01.168082] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.404 [2024-07-15 10:36:01.168291] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:36.404 [... the same *ERROR* line from accel_dpdk_cryptodev.c:476 (accel_dpdk_cryptodev_task_alloc_resources: "Failed to get dst_mbufs!") repeats for every allocation attempt between 10:36:01.168 and 10:36:01.381; several hundred identical entries condensed here ...]
00:28:36.673 [2024-07-15 10:36:01.381864] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:36.673 [2024-07-15 10:36:01.382171] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.382192] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.382244] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.382287] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.382330] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.382373] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.382678] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.382697] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.383878] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.383928] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.383968] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384008] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384304] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384324] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384392] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384432] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384472] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384513] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384835] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.384858] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386387] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386430] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386477] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386527] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.673 [2024-07-15 10:36:01.386763] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386783] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386833] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386872] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386932] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.386972] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.387170] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.387190] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.388896] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.388949] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.673 [2024-07-15 10:36:01.388995] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389038] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389229] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389248] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389303] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389353] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389394] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389434] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389629] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.389653] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.390975] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391022] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391073] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391115] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.674 [2024-07-15 10:36:01.391404] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391423] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391473] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391517] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391561] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391604] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391938] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.391958] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393187] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393230] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393270] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393310] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393597] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393618] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393700] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393741] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393783] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.393824] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.394159] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.394184] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.395610] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.395651] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.395690] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.395738] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.674 [2024-07-15 10:36:01.396011] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.396030] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.396101] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.396139] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.396177] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.396224] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.396404] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.396422] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398159] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398205] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398252] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398295] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398488] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398508] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398560] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398607] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398651] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398701] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398891] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.398919] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.400239] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.400286] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.400335] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.400632] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.674 [2024-07-15 10:36:01.400971] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.400992] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.401053] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.401096] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.401139] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.401181] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.401374] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.401395] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.402550] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.402856] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.402912] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.403218] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.403569] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.403590] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.403650] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.404330] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.404373] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.405149] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.405420] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.405438] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.406916] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.407228] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.407274] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.407606] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.674 [2024-07-15 10:36:01.407810] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.407832] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.407911] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.408801] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.408851] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.409912] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.410116] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.674 [2024-07-15 10:36:01.410134] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.411861] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.412526] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.412572] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.413373] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.413643] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.413663] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.413723] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.414535] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.414582] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.415234] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.415595] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.415618] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.417168] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.418219] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.418272] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.419107] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.675 [2024-07-15 10:36:01.419338] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.419356] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.419414] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.420392] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.420440] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.420750] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.421093] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.421116] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.422631] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.423192] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.423238] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.424105] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.424305] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.424323] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.424380] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.425362] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.425408] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.425716] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.426053] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.426076] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.427548] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.428550] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.428597] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.429639] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.675 [2024-07-15 10:36:01.429925] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.429945] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.430000] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.430807] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.430848] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.431786] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.432125] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.432153] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.433887] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.434711] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.434754] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.435697] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.436052] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.436071] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.436129] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.436440] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.436482] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.436780] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.437143] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.437164] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.438787] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.439093] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.439136] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.439434] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.675 [2024-07-15 10:36:01.439704] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.439724] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.439783] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.440097] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.440147] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.440443] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.440743] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.440762] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.442366] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.442665] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.442706] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.443009] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.443292] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.443311] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.443371] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.443667] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.443712] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.675 [2024-07-15 10:36:01.444019] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.444330] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.444348] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.446008] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.446309] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.446351] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.446652] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.676 [2024-07-15 10:36:01.446969] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.446989] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.447074] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.447385] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.447430] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.447733] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.448077] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.448100] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.449832] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.450145] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.450195] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.450505] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.450844] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.450868] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.450927] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.451235] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.676 [2024-07-15 10:36:01.451295] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.451603] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.451954] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.451977] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.453634] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.453949] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.454005] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.454300] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.938 [2024-07-15 10:36:01.454631] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.454652] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.938 [2024-07-15 10:36:01.454705] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.455009] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.455062] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.455359] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.455690] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.455712] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.457390] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.457689] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.457735] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.458047] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.458391] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.458413] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.458468] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.458766] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.458814] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.459124] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.459491] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.459512] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.461241] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.461549] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.461593] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.461890] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.939 [2024-07-15 10:36:01.462266] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.462288] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.462344] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.462640] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.462683] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.463399] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.463589] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.463608] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.465231] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.465535] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.465579] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.466419] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.466651] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.466671] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.466728] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.467230] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.467276] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.468083] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.468296] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.468315] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.470155] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.471095] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.472111] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.472777] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.939 [2024-07-15 10:36:01.473027] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.473046] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.473114] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.473639] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.473693] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.473996] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.474277] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.474296] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.476367] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.477141] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.477489] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.477787] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.478082] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.478102] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.478412] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.479402] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.480330] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.480910] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.481142] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.481160] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.483020] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.483963] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.484961] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.939 [2024-07-15 10:36:01.485650] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.939 [2024-07-15 10:36:01.485873] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.945 [2024-07-15 10:36:01.685173] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
... (the same "Failed to get dst_mbufs!" record from accel_dpdk_cryptodev.c:476 repeats continuously between 10:36:01.485873 and 10:36:01.685173 with only the timestamp changing; duplicate records omitted)
00:28:36.945 [2024-07-15 10:36:01.685469] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.685489] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.685546] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.685602] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.685642] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.685682] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.686018] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.686043] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.687642] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.687686] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.687744] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.687795] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.688169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.688192] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.688251] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.688294] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.688338] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.688379] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.688717] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.688741] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.690434] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.690479] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.690525] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.690567] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.945 [2024-07-15 10:36:01.690834] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.690853] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.690917] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.690956] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.690998] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.691038] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.691330] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.691349] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693012] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693057] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693100] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693144] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693467] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693485] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693540] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693604] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693655] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693694] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693950] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.693972] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.695782] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.695826] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.695882] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.696206] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.945 [2024-07-15 10:36:01.696526] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.696546] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.696602] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.696643] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.696688] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.696733] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.697067] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.697094] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.698836] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.699145] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.699191] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.699493] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.699827] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.699847] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.699914] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.700218] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.700267] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.700563] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.700888] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.700917] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.702754] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.703067] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.703115] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.703412] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.945 [2024-07-15 10:36:01.703746] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.703768] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.703827] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.704135] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.704185] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.704491] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.704812] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.704833] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.706463] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.945 [2024-07-15 10:36:01.706763] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.706812] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.707118] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.707457] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.707478] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.707536] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.707836] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.707883] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.708205] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.708560] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.708581] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.710271] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.710581] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.710629] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.710947] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.946 [2024-07-15 10:36:01.711318] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.711339] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.711398] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.711696] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.711746] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.712328] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.712564] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.712584] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.714546] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.715365] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.715411] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.716139] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.716334] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.716352] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.716406] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.717262] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.717314] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.717614] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.717959] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.717980] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.719452] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.720068] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.720116] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.720943] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:36.946 [2024-07-15 10:36:01.721147] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.721166] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.721234] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.721542] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.721587] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.721889] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.722233] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:36.946 [2024-07-15 10:36:01.722255] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.723498] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.724304] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.724351] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.724658] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.725024] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.725046] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.725106] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.725412] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.725459] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.725770] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.725976] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.726016] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.727150] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.727459] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.727502] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.727798] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.209 [2024-07-15 10:36:01.728170] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.728192] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.728250] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.728762] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.728803] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.729571] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.729808] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.729827] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.731186] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.731488] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.731534] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.731833] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.732037] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.732056] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.732109] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.732987] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.733039] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.733700] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.733932] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.733950] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.735613] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.736110] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.736155] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.736950] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.209 [2024-07-15 10:36:01.737189] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.737209] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.737269] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.738189] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.738239] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.739038] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.739329] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.739352] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.741011] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.741789] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.741836] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.742428] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.742676] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.742694] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.742751] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.743387] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.743447] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.743752] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.744070] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.744090] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.745461] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.746433] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.746485] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.747531] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.209 [2024-07-15 10:36:01.747793] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.747818] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.747876] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.748179] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.748223] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.748534] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.748865] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.748885] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.750102] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.750860] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.750914] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.751256] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.751583] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.751606] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.751671] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.751974] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.752017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.752843] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.753069] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.753087] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.754367] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.754670] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.754715] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.755024] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.209 [2024-07-15 10:36:01.755361] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.755383] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.755439] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.756338] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.756388] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.757213] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.757446] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.757465] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.758861] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.759174] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.759223] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.759772] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.760004] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.760023] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.760079] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.760747] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.760791] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.761694] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.761890] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.761913] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.763674] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.764433] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.765207] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.765679] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.209 [2024-07-15 10:36:01.765894] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.765921] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.765979] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.766756] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.766802] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.767123] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.767448] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.767466] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.769411] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.770189] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.770925] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.771227] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.771563] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.771584] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.771912] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.772398] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.773168] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.774110] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.774345] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.774364] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.776313] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.776622] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.776932] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.777236] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.209 [2024-07-15 10:36:01.777575] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.777596] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.777917] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.778217] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.779268] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.779568] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.779911] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.779932] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.781827] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.209 [2024-07-15 10:36:01.782754] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.783686] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.784632] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.784956] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.784984] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.785296] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.785593] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.785891] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.786875] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.787082] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.787101] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.789103] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.790050] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.790405] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.790703] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.210 [2024-07-15 10:36:01.790975] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.790995] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.791305] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.792190] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.793063] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.794081] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.794277] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.794294] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.796145] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.796456] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.796762] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.797068] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.797325] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.797343] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.798196] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.799141] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.800071] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.800685] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.800953] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.800972] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.802532] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.802841] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.803430] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.210 [2024-07-15 10:36:01.804252] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.210 [2024-07-15 10:36:01.804450] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:37.210 [... same *ERROR* from accel_dpdk_cryptodev_task_alloc_resources repeated for each failed allocation through 10:36:02.001 ...]
00:28:37.471 [2024-07-15 10:36:02.001781] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:37.471 [2024-07-15 10:36:02.002144] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.002168] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.002218] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.002254] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.002289] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.002324] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.002530] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.002544] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.004136] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.004724] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.004759] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.005535] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.005775] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.005791] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.005846] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.006855] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.006909] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.007207] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.007488] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.007503] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.008935] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.009752] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.009799] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.010785] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.471 [2024-07-15 10:36:02.010997] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.011012] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.011058] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.011478] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.011514] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.012233] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.012572] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.012596] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.014066] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.014852] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.014889] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.015440] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.015626] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.015640] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.015688] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.016620] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.016664] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.016971] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.017275] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.017295] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.018754] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.019368] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.019407] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.020280] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.471 [2024-07-15 10:36:02.020465] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.020480] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.020522] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.020824] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.020860] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.471 [2024-07-15 10:36:02.021160] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.021447] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.021462] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.022822] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.023834] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.023874] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.024908] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.025279] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.025298] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.025356] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.025652] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.025687] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.025992] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.026309] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.026329] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.027547] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.028587] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.028632] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.028934] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.472 [2024-07-15 10:36:02.029245] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.029264] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.029309] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.029605] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.029642] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.029958] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.030143] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.030158] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.031312] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.031609] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.031645] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.031937] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.032228] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.032244] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.032293] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.032577] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.032611] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.033467] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.033651] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.033666] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.035207] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.035501] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.035545] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.035834] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:37.472 [2024-07-15 10:36:02.036023] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.036039] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.036085] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.037054] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.037100] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.037387] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.037688] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.037704] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.039428] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.039721] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.039763] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.040062] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.040350] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.040365] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.040416] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.040702] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.040737] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.041029] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.041267] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.041281] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.042885] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:37.472 [2024-07-15 10:36:02.043183] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.537867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.539124] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.038 [2024-07-15 10:36:02.539396] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.539420] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.539483] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.540677] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.540729] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.541938] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.542389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.542416] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.544220] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.545563] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.545616] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.546623] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.546953] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.546975] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.547037] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.548369] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.548418] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.549281] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.549682] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.549704] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.551642] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.552979] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.553037] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.554028] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.038 [2024-07-15 10:36:02.554319] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.554341] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.554406] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.555380] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.555417] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.556388] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.556667] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.556685] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.558123] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.558431] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.558467] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.559300] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.559519] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.559535] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.559580] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.560548] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.560585] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.561612] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.561862] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.561879] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.563031] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.563953] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.563998] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.564296] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.038 [2024-07-15 10:36:02.564590] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.564606] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.564648] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.565414] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.565450] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.565743] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.565934] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.565951] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.567130] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.567968] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.568006] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.568972] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.569162] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.569178] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.569223] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.569803] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.569845] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.570713] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.571052] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.571073] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.572513] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.573344] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.574314] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.574793] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.038 [2024-07-15 10:36:02.574985] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.575003] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.575053] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.576035] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.576081] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.576856] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.577154] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.577172] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.579857] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.580808] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.581390] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.582181] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.038 [2024-07-15 10:36:02.582389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.582407] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.583236] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.583567] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.583939] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.584736] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.585088] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.585110] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.587127] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.587630] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.588563] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.588864] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.039 [2024-07-15 10:36:02.589130] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.589147] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.589857] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.590159] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.591121] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.592007] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.592375] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.592392] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.593913] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.594684] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.595092] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.595471] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.595659] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.595677] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.596672] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.597297] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.598089] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.598679] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.598868] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.598884] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.601045] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.601828] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.602623] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.603548] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.039 [2024-07-15 10:36:02.603804] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.603820] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.604379] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.605386] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.605685] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.606345] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.606583] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.606601] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.608996] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.609911] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.610379] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.611060] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.611381] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.611401] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.612034] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.612542] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.612830] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.613777] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.613973] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.614000] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.615981] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.616277] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.617166] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.617464] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.039 [2024-07-15 10:36:02.617787] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.617803] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.618702] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.619708] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.620364] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.621124] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.621338] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.621361] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.623007] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.623473] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.624222] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.624978] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.625179] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.625195] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.625955] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.626480] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.627477] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.627767] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.628055] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.628072] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.630268] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.631122] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.631889] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.632310] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.039 [2024-07-15 10:36:02.632495] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.632512] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.632810] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.633382] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.633951] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.634242] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.634426] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.634442] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.636025] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.636701] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.636992] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.637859] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.638201] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.638220] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.638517] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.639432] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.640441] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.641061] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.641285] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.641301] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.643549] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.643894] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.644289] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.039 [2024-07-15 10:36:02.645036] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.039 [2024-07-15 10:36:02.645220] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:38.300 [2024-07-15 10:36:02.826694] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:38.300 [2024-07-15 10:36:02.826912] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.826945] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.826990] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.827923] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.827958] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.828923] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.829207] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.829223] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.832110] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.832943] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.832987] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.834013] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.834215] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.834232] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.834280] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.835077] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.835114] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.835945] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.836130] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.836146] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.838649] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.838952] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.838988] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.839975] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.300 [2024-07-15 10:36:02.840158] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.840174] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.840214] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.841172] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.841217] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.842086] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.842311] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.842327] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.844888] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.845353] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.845389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.845961] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.846285] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.846305] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.846347] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.847213] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.847267] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.848202] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.848380] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.848396] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.851449] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.852389] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.852441] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.852733] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.300 [2024-07-15 10:36:02.853023] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.853040] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.853080] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.853787] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.853822] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.854113] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.854297] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.854313] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.855463] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.856268] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.856305] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.857215] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.857400] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.857416] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.857458] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.300 [2024-07-15 10:36:02.858098] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.858145] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.859076] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.859402] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.859420] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.860955] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.861754] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.861790] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.862718] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.301 [2024-07-15 10:36:02.862904] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.862920] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.862963] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.863525] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.863560] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.864376] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.864561] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.864577] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.865737] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.866034] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.866074] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.866994] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.867347] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.867366] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.867406] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.867931] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.867968] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.868767] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.868953] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.868969] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.870156] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.871186] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.871223] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.872013] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.301 [2024-07-15 10:36:02.872306] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.872323] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.872364] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.872649] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.872693] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.872985] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.873270] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.873286] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.874337] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.875165] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.875202] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.876093] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.876274] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.876290] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.876334] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.877304] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.877348] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.877640] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.877952] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.877970] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.879449] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.880396] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.880440] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.881410] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.301 [2024-07-15 10:36:02.881651] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.881666] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.881706] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.882503] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.882539] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.883465] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.883649] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.883666] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.885390] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.886029] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.886064] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.886861] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.887053] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.887070] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.887113] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.888035] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.888071] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.888661] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.888882] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.888898] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.890378] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.890692] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.891169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.891923] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.301 [2024-07-15 10:36:02.892106] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.892122] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.892169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.892958] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.892994] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.893796] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.894073] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.894098] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.896657] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.897543] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.898309] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.899068] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.899309] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.899331] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.899630] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.899921] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.900208] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.900937] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.901151] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.901171] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.902674] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.902968] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.903258] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.904068] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.301 [2024-07-15 10:36:02.904282] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.904298] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.904956] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.905961] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.906864] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.907164] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.907516] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.907535] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.909686] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.910590] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.911408] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.911721] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.912059] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.912077] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.912368] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.912656] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.913469] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.914214] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.914488] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.914506] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.916183] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.916477] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.917316] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.918073] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.301 [2024-07-15 10:36:02.918329] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.918352] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.919334] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.920302] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.920611] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.920906] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.921203] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.921220] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.923568] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.924465] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.924762] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.925057] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.925348] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.925365] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.925654] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.926563] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.927378] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.927851] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.928061] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.928077] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.929889] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.930784] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.931584] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.932051] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.301 [2024-07-15 10:36:02.932256] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.932272] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.933283] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.933579] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.933867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.934159] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.934490] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.934507] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.936691] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.936992] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.937281] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.937571] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.937894] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.937918] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.938890] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.939842] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.301 [2024-07-15 10:36:02.940444] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.941210] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.941406] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.941423] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.944037] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.944864] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.945344] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.946101] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.302 [2024-07-15 10:36:02.946286] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.946302] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.946602] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.946890] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.947195] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.947680] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.947898] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.947918] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.949308] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.949601] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.949892] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.950353] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.950566] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.950581] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.951571] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.952238] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.952990] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.953539] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.953865] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.953882] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.956231] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.956714] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.957536] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.958457] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.302 [2024-07-15 10:36:02.958644] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.958660] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.959349] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.959653] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.959942] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.960228] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.960500] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.960516] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.962373] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.963289] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.964210] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.964517] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.964840] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.964860] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.965157] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.965446] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.966332] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.967181] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.967368] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.967384] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.969457] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.970329] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.970620] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.970910] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.302 [2024-07-15 10:36:02.971270] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.971290] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.971620] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.972427] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.973355] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.974190] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.974377] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.974394] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.976039] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.976331] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.976618] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.976910] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.977119] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.977134] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.977942] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.978867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.979796] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.980325] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.980561] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.980577] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.982015] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.982304] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.982643] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.302 [2024-07-15 10:36:02.983444] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.302 [2024-07-15 10:36:02.983632] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:38.302 [... identical "Failed to get dst_mbufs!" errors from accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources repeated continuously between 10:36:02.983632 and 10:36:03.136266 ...]
00:28:38.578 [2024-07-15 10:36:03.136266] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:28:38.578 [2024-07-15 10:36:03.136526] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.136542] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.136582] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.137336] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.137375] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.137907] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.138089] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.138105] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.139637] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.139936] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.139973] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.140748] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.140977] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.140993] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.141039] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.141571] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.141610] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.142410] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.142592] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.142611] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.144342] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.145053] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.145089] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.145839] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.578 [2024-07-15 10:36:03.146145] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.146163] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.146210] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.147065] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.147108] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.147929] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.148217] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.148234] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.149872] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.150625] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.150662] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.151121] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.151313] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.151330] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.151388] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.152297] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.152337] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.152625] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.152956] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.152974] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.154373] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.154831] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.154868] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.155627] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.578 [2024-07-15 10:36:03.155811] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.155827] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.578 [2024-07-15 10:36:03.155876] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.156169] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.156209] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.156500] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.156848] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.156867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.158133] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.158896] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.158938] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.159539] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.159845] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.159861] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.159908] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.160202] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.160237] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.160523] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.160773] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.160789] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.161923] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.162571] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.162609] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.162897] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.579 [2024-07-15 10:36:03.163206] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.163222] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.163264] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.163553] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.163589] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.164394] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.164602] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.164618] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.165829] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.166140] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.166179] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.166463] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.166788] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.166808] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.166849] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.167623] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.167659] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.168449] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.168696] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.168713] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.170039] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.170329] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.170617] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.171487] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.579 [2024-07-15 10:36:03.171714] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.171730] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.171775] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.172229] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.172267] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.173020] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.173201] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.173217] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.175516] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.176273] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.176864] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.177828] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.178019] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.178036] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.178432] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.178727] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.179017] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.179303] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.179488] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.179503] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.181030] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.181322] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.181608] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.181896] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.579 [2024-07-15 10:36:03.182090] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.182106] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.182860] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.183385] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.184276] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.185237] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.185516] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.185533] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.187881] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.188703] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.189521] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.190283] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.190587] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.190604] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.190906] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.191194] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.191481] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.192293] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.192504] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.192521] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.194205] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.194502] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.194789] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.195081] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.579 [2024-07-15 10:36:03.195271] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.195287] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.195997] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.196528] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.197366] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.198238] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.198413] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.198429] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.200111] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.200936] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.201734] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.202659] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.202846] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.202862] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.203398] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.204193] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.205114] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.206059] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.206309] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.206333] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.208919] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.209859] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.210780] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.211413] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.579 [2024-07-15 10:36:03.211600] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.211616] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.212430] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.213361] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.214370] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.214666] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.214993] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.215013] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.217323] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.218250] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.218767] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.219581] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.219764] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.219780] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.220764] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.221645] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.221938] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.222250] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.222586] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.222606] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.224659] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.225205] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.226009] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.226942] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.579 [2024-07-15 10:36:03.227126] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.227142] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.227779] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.228083] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.228369] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.228660] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.228873] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.228889] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.230763] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.231581] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.232504] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.232802] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.233118] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.233138] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.233432] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.233721] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.234689] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.235625] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.235811] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.235827] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.237766] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.238063] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.238355] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.238648] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.579 [2024-07-15 10:36:03.238972] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.238990] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.239291] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.239581] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.579 [2024-07-15 10:36:03.239881] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.240176] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.240504] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.240522] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.242402] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.242690] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.242982] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.243273] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.243529] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.243546] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.243845] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.244137] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.244424] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.244717] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.245029] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.245046] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.246916] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.247214] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.247520] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.247808] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.580 [2024-07-15 10:36:03.248143] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.248164] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.248458] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.248751] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.249046] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.249335] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.249687] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.249707] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.251605] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.251897] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.252192] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.252484] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.252848] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.252867] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.253172] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.253461] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.253919] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.254586] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.254768] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.254784] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.256600] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.257082] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.257832] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.258548] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.580 [2024-07-15 10:36:03.258739] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.258755] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.259508] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.260003] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.260294] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.260581] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.260896] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.260921] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.262826] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.263422] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.263728] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.264018] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.264347] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.264367] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.264930] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.265683] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.266339] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.267324] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.267518] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.267534] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.269535] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.270286] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.271073] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.271116] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.580 [2024-07-15 10:36:03.271300] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.271317] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.272153] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.272447] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.272737] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.273033] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.273360] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.273381] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.275237] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.275278] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.275567] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.275606] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.275925] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.275946] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.276240] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.276277] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.276561] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.276596] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.276776] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.276800] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.278194] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.278235] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.278523] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:28:38.580 [2024-07-15 10:36:03.278557] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 
00:28:38.580 [2024-07-15 10:36:03.278873] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
... (the same *ERROR* line repeats continuously, differing only in timestamp, until the final occurrence below) ...
00:28:38.582 [2024-07-15 10:36:03.352316] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
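The flood of dst_mbufs errors above is transient exhaustion of the destination-mbuf pool under the 128-deep, 64 KiB verify workload; the Fail/s column in the summary that follows stays at 0.00, which suggests the failed allocations are retried rather than surfaced as I/O errors. Below is a minimal, generic Python sketch of that allocate-or-requeue pattern (illustrative only; the pool size, task shape and names are assumptions, not SPDK's implementation):

from collections import deque

class MbufPool:
    """Toy stand-in for a fixed-size DPDK mbuf mempool (illustrative, not DPDK)."""
    def __init__(self, capacity):
        self.free = capacity

    def alloc_bulk(self, count):
        if self.free < count:            # pool temporarily exhausted (-ENOMEM)
            return None
        self.free -= count
        return ["mbuf"] * count

    def free_bulk(self, bufs):
        self.free += len(bufs)

def submit(pool, retry_queue, bufs_needed):
    """Allocate dst buffers for one task, or park the task for a later retry."""
    if pool.alloc_bulk(bufs_needed) is None:
        retry_queue.append(bufs_needed)  # corresponds to the ERROR lines above
        return False
    return True

pool, retry_queue = MbufPool(capacity=64), deque()
for _ in range(128):                     # a 128-deep queue overcommits the pool
    submit(pool, retry_queue, bufs_needed=16)
print(f"tasks parked for retry: {len(retry_queue)}")  # 124 with these toy numbers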
00:28:39.144 
00:28:39.144 Latency(us)
00:28:39.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.144 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:39.144 Verification LBA range: start 0x0 length 0x100
00:28:39.144 crypto_ram : 5.46 49.80 3.11 0.00 0.00 2521686.40 1703.94 1785095.78
00:28:39.144 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:39.144 Verification LBA range: start 0x100 length 0x100
00:28:39.144 crypto_ram : 5.62 68.37 4.27 0.00 0.00 1840744.58 131701.15 1691143.37
00:28:39.144 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:39.144 Verification LBA range: start 0x0 length 0x100
00:28:39.144 crypto_ram1 : 5.46 49.79 3.11 0.00 0.00 2470723.37 1861.22 1785095.78
00:28:39.144 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:39.144 Verification LBA range: start 0x100 length 0x100
00:28:39.144 crypto_ram1 : 5.62 68.36 4.27 0.00 0.00 1803523.41 131701.15 1577058.30
00:28:39.144 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:39.144 Verification LBA range: start 0x0 length 0x100
00:28:39.144 crypto_ram2 : 5.36 357.44 22.34 0.00 0.00 336644.86 54945.38 687865.86
00:28:39.144 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:39.144 Verification LBA range: start 0x100 length 0x100
00:28:39.144 crypto_ram2 : 5.36 462.45 28.90 0.00 0.00 258824.63 5662.31 404330.91
00:28:39.144 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:28:39.144 Verification LBA range: start 0x0 length 0x100
00:28:39.144 crypto_ram3 : 5.40 364.67 22.79 0.00 0.00 323437.98 16882.07 687865.86
00:28:39.144 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:28:39.144 Verification LBA range: start 0x100 length 0x100
00:28:39.144 crypto_ram3 : 5.43 471.24 29.45 0.00 0.00 249969.53 40684.75 340577.48
00:28:39.144 ===================================================================================================================
00:28:39.144 Total : 1892.11 118.26 0.00 0.00 519873.57 1703.94 1785095.78
00:28:39.401 
00:28:39.401 real 0m8.543s
00:28:39.401 user 0m15.829s
00:28:39.401 sys 0m0.375s
00:28:39.401 10:36:04 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:39.401 10:36:04 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:28:39.401 ************************************
00:28:39.401 END TEST bdev_verify_big_io
00:28:39.401 ************************************
00:28:39.401 10:36:04 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:28:39.401 10:36:04 blockdev_crypto_qat -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:39.401 10:36:04 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:28:39.401 10:36:04 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:39.401 10:36:04 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:39.401 ************************************
00:28:39.401 START TEST bdev_write_zeroes
00:28:39.401 ************************************
00:28:39.401 10:36:04 blockdev_crypto_qat.bdev_write_zeroes -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:39.401 [2024-07-15 10:36:04.170195] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:39.401 [2024-07-15 10:36:04.170240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1960831 ] 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 
EAL: Requested device 0000:3f:01.4 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:39.659 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:39.659 [2024-07-15 10:36:04.259674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.659 [2024-07-15 10:36:04.330393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.659 [2024-07-15 10:36:04.351340] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:28:39.659 [2024-07-15 10:36:04.359361] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:28:39.659 [2024-07-15 10:36:04.367380] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:28:39.916 [2024-07-15 10:36:04.466134] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:28:42.435 [2024-07-15 10:36:06.599903] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:28:42.435 [2024-07-15 10:36:06.599958] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:28:42.435 [2024-07-15 10:36:06.599968] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:42.435 [2024-07-15 10:36:06.607917] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:28:42.435 [2024-07-15 10:36:06.607930] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:28:42.435 [2024-07-15 10:36:06.607937] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:28:42.435 [2024-07-15 10:36:06.615934] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:28:42.435 [2024-07-15 10:36:06.615946] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:28:42.435 [2024-07-15 10:36:06.615953] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival
00:28:42.435 [2024-07-15 10:36:06.623954] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:28:42.435 [2024-07-15 10:36:06.623965] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:28:42.435 [2024-07-15 10:36:06.623973] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:28:42.435 Running I/O for 1 seconds...
00:28:43.000 
00:28:43.000 Latency(us)
00:28:43.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:43.000 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:43.000 crypto_ram : 1.01 3168.63 12.38 0.00 0.00 40212.65 3434.09 50121.93
00:28:43.000 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:43.000 crypto_ram1 : 1.02 3181.98 12.43 0.00 0.00 39924.70 3565.16 46556.77
00:28:43.000 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:43.000 crypto_ram2 : 1.01 24736.10 96.63 0.00 0.00 5124.88 1612.19 7077.89
00:28:43.000 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:43.000 crypto_ram3 : 1.01 24769.39 96.76 0.00 0.00 5106.97 1612.19 5636.10
00:28:43.000 ===================================================================================================================
00:28:43.000 Total : 55856.09 218.19 0.00 0.00 9100.59 1612.19 50121.93
00:28:43.257 
00:28:43.257 real 0m3.888s
00:28:43.257 user 0m3.559s
00:28:43.257 sys 0m0.290s
00:28:43.257 10:36:08 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:43.257 10:36:08 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:28:43.257 ************************************
00:28:43.257 END TEST bdev_write_zeroes
00:28:43.257 ************************************
00:28:43.515 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 0
00:28:43.515 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:43.515 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:28:43.515 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:43.515 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:28:43.515 ************************************
00:28:43.515 START TEST bdev_json_nonenclosed
00:28:43.515 ************************************
00:28:43.515 10:36:08 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:43.515 [2024-07-15 10:36:08.143030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
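As a quick cross-check of the bdev_write_zeroes summary above (purely illustrative, not part of the test): the Total row's IOPS and MiB/s are the sums of the four per-bdev rows, and the Total average latency is roughly their IOPS-weighted mean, only roughly because the table prints rounded values. A standalone Python check using the numbers copied from the table:

# Sanity check of the write_zeroes summary table (values copied from the log above).
rows = {
    # name: (IOPS, MiB/s, average latency in us)
    "crypto_ram":  (3168.63, 12.38, 40212.65),
    "crypto_ram1": (3181.98, 12.43, 39924.70),
    "crypto_ram2": (24736.10, 96.63, 5124.88),
    "crypto_ram3": (24769.39, 96.76, 5106.97),
}

total_iops = sum(iops for iops, _, _ in rows.values())
total_mibs = sum(mibs for _, mibs, _ in rows.values())
# IOPS-weighted mean latency; only approximates the reported 9100.59 us
# because the per-bdev values in the table are rounded.
weighted_lat = sum(iops * lat for iops, _, lat in rows.values()) / total_iops

print(f"{total_iops:.2f} IOPS, {total_mibs:.2f} MiB/s, ~{weighted_lat:.0f} us avg")
# prints 55856.10 IOPS, 218.20 MiB/s vs. the reported Total of 55856.09 / 218.19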
00:28:43.515 [2024-07-15 10:36:08.143070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961685 ] 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:43.515 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:43.515 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.515 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:43.515 [2024-07-15 10:36:08.230287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.515 [2024-07-15 10:36:08.298737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.515 [2024-07-15 10:36:08.298794] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:28:43.515 [2024-07-15 10:36:08.298807] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:43.515 [2024-07-15 10:36:08.298816] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:43.790 00:28:43.790 real 0m0.281s 00:28:43.790 user 0m0.161s 00:28:43.790 sys 0m0.118s 00:28:43.790 10:36:08 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:28:43.790 10:36:08 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.790 10:36:08 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:28:43.790 ************************************ 00:28:43.790 END TEST bdev_json_nonenclosed 00:28:43.790 ************************************ 00:28:43.790 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234 00:28:43.790 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@782 -- # true 00:28:43.790 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:43.790 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:28:43.790 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.790 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:43.790 ************************************ 00:28:43.790 START TEST bdev_json_nonarray 00:28:43.790 ************************************ 00:28:43.790 10:36:08 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 
-o 4096 -w write_zeroes -t 1 '' 00:28:43.790 [2024-07-15 10:36:08.515168] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:43.790 [2024-07-15 10:36:08.515213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961752 ] 00:28:43.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.790 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:43.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.790 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:43.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.790 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:43.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.790 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:43.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.790 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:43.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.790 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:43.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.790 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:43.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.790 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:43.791 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.791 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:43.791 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:43.791 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:44.066 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.066 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:01.5 cannot be used 
00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:44.067 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:44.067 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:44.067 [2024-07-15 10:36:08.609080] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.067 [2024-07-15 10:36:08.678524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.067 [2024-07-15 10:36:08.678587] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:28:44.067 [2024-07-15 10:36:08.678601] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:44.067 [2024-07-15 10:36:08.678609] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:44.067 00:28:44.067 real 0m0.288s 00:28:44.067 user 0m0.169s 00:28:44.067 sys 0m0.117s 00:28:44.067 10:36:08 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:28:44.067 10:36:08 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.067 10:36:08 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:28:44.067 ************************************ 00:28:44.067 END TEST bdev_json_nonarray 00:28:44.067 ************************************ 00:28:44.067 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@1142 -- # return 234 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@785 -- # true 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@787 -- # [[ crypto_qat == bdev ]] 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@794 -- # [[ crypto_qat == gpt ]] 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@798 -- # [[ crypto_qat == crypto_sw ]] 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@811 -- # cleanup 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:28:44.067 10:36:08 blockdev_crypto_qat -- 
bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:28:44.067 10:36:08 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:28:44.067 00:28:44.067 real 1m6.984s 00:28:44.067 user 2m43.338s 00:28:44.067 sys 0m7.301s 00:28:44.067 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:44.067 10:36:08 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:28:44.067 ************************************ 00:28:44.067 END TEST blockdev_crypto_qat 00:28:44.067 ************************************ 00:28:44.067 10:36:08 -- common/autotest_common.sh@1142 -- # return 0 00:28:44.067 10:36:08 -- spdk/autotest.sh@360 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:28:44.067 10:36:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:44.067 10:36:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.067 10:36:08 -- common/autotest_common.sh@10 -- # set +x 00:28:44.325 ************************************ 00:28:44.325 START TEST chaining 00:28:44.325 ************************************ 00:28:44.325 10:36:08 chaining -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:28:44.325 * Looking for test storage... 00:28:44.325 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:28:44.325 10:36:08 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.325 10:36:08 chaining -- nvmf/common.sh@7 -- # uname -s 00:28:44.325 10:36:09 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.325 10:36:09 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.325 10:36:09 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.325 10:36:09 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.325 10:36:09 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.325 10:36:09 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:44.326 10:36:09 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.326 10:36:09 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.326 
10:36:09 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.326 10:36:09 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.326 10:36:09 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.326 10:36:09 chaining -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.326 10:36:09 chaining -- paths/export.sh@5 -- # export PATH 00:28:44.326 10:36:09 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@47 -- # : 0 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:44.326 10:36:09 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:28:44.326 10:36:09 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:28:44.326 10:36:09 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 33445566778899001122334455001122) 00:28:44.326 10:36:09 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:28:44.326 10:36:09 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:28:44.326 10:36:09 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:44.326 10:36:09 chaining -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.326 10:36:09 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:44.326 10:36:09 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:44.326 10:36:09 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:28:44.326 10:36:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@296 -- # e810=() 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@297 -- # x722=() 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@298 -- # mlx=() 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@341 -- 
# echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:28:54.285 Found 0000:20:00.0 (0x8086 - 0x159b) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:28:54.285 Found 0000:20:00.1 (0x8086 - 0x159b) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:28:54.285 Found net devices under 0000:20:00.0: cvl_0_0 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:28:54.285 Found net devices under 0000:20:00.1: cvl_0_1 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:54.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:28:54.285 00:28:54.285 --- 10.0.0.2 ping statistics --- 00:28:54.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.285 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:54.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:54.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:28:54.285 00:28:54.285 --- 10.0.0.1 ping statistics --- 00:28:54.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.285 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@422 -- # return 0 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:54.285 10:36:17 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:28:54.285 10:36:17 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:54.285 10:36:17 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:54.285 10:36:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.286 10:36:17 chaining -- nvmf/common.sh@481 -- # nvmfpid=1966005 00:28:54.286 10:36:17 chaining -- nvmf/common.sh@482 -- # waitforlisten 1966005 00:28:54.286 10:36:17 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:54.286 10:36:17 chaining -- common/autotest_common.sh@829 -- # '[' -z 1966005 ']' 00:28:54.286 10:36:17 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.286 10:36:17 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:54.286 10:36:17 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.286 10:36:17 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:54.286 10:36:17 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.286 [2024-07-15 10:36:17.838156] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:54.286 [2024-07-15 10:36:17.838208] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:54.286 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:54.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.286 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:54.286 [2024-07-15 10:36:17.936509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.286 [2024-07-15 10:36:18.008672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:54.286 [2024-07-15 10:36:18.008710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:54.286 [2024-07-15 10:36:18.008719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:54.286 [2024-07-15 10:36:18.008727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:54.286 [2024-07-15 10:36:18.008734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
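The nvmftestinit/nvmfappstart traces above amount to a small piece of plumbing: the target-side NIC (cvl_0_0) is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened, connectivity is verified with ping in both directions, and nvmf_tgt is started inside the namespace. A condensed sketch of those steps, built from the commands shown in the trace (running the target in the background with & and capturing $! is an assumption; the real helpers track the pid as nvmfpid and wait on it with waitforlisten, as logged):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"                                  # autotest helper: waits for /var/tmp/spdk.sock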
00:28:54.286 [2024-07-15 10:36:18.008759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@862 -- # return 0 00:28:54.286 10:36:18 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.286 10:36:18 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@69 -- # mktemp 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.bgDHyGELUO 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@69 -- # mktemp 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.Tk9mKSEViU 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.286 malloc0 00:28:54.286 true 00:28:54.286 true 00:28:54.286 [2024-07-15 10:36:18.716627] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:28:54.286 crypto0 00:28:54.286 [2024-07-15 10:36:18.724652] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:28:54.286 crypto1 00:28:54.286 [2024-07-15 10:36:18.732738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.286 [2024-07-15 10:36:18.748890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@85 -- # update_stats 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:54.286 10:36:18 chaining -- 
bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:54.286 10:36:18 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.bgDHyGELUO bs=1K count=64 00:28:54.286 64+0 records in 00:28:54.286 64+0 records out 00:28:54.286 65536 bytes (66 kB, 64 KiB) copied, 0.00105924 s, 61.9 MB/s 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.bgDHyGELUO --ob Nvme0n1 --bs 65536 --count 1 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@25 -- # local config 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:28:54.286 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@31 -- # config='{ 00:28:54.286 "subsystems": [ 00:28:54.286 { 00:28:54.286 "subsystem": "bdev", 00:28:54.286 "config": [ 00:28:54.286 { 00:28:54.286 "method": "bdev_nvme_attach_controller", 00:28:54.286 "params": { 00:28:54.286 "trtype": "tcp", 
00:28:54.286 "adrfam": "IPv4", 00:28:54.286 "name": "Nvme0", 00:28:54.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:54.286 "traddr": "10.0.0.2", 00:28:54.286 "trsvcid": "4420" 00:28:54.286 } 00:28:54.286 }, 00:28:54.286 { 00:28:54.286 "method": "bdev_set_options", 00:28:54.286 "params": { 00:28:54.286 "bdev_auto_examine": false 00:28:54.286 } 00:28:54.286 } 00:28:54.286 ] 00:28:54.286 } 00:28:54.286 ] 00:28:54.286 }' 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:28:54.286 "subsystems": [ 00:28:54.286 { 00:28:54.286 "subsystem": "bdev", 00:28:54.286 "config": [ 00:28:54.286 { 00:28:54.286 "method": "bdev_nvme_attach_controller", 00:28:54.286 "params": { 00:28:54.286 "trtype": "tcp", 00:28:54.286 "adrfam": "IPv4", 00:28:54.286 "name": "Nvme0", 00:28:54.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:54.286 "traddr": "10.0.0.2", 00:28:54.286 "trsvcid": "4420" 00:28:54.286 } 00:28:54.286 }, 00:28:54.286 { 00:28:54.286 "method": "bdev_set_options", 00:28:54.286 "params": { 00:28:54.286 "bdev_auto_examine": false 00:28:54.286 } 00:28:54.286 } 00:28:54.286 ] 00:28:54.286 } 00:28:54.286 ] 00:28:54.286 }' 00:28:54.286 10:36:18 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.bgDHyGELUO --ob Nvme0n1 --bs 65536 --count 1 00:28:54.286 [2024-07-15 10:36:19.047971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:54.286 [2024-07-15 10:36:19.048016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966314 ] 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: 
Requested device 0000:3d:02.4 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.543 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:54.543 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:54.544 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.544 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:54.544 [2024-07-15 10:36:19.138916] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.544 [2024-07-15 10:36:19.209323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.058  Copying: 64/64 [kB] (average 15 MBps) 00:28:55.058 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:55.059 
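The counter checks that follow all go through the same small helpers visible in the xtrace output: get_stat asks the accel framework for its statistics over RPC and pulls a single field out with jq, and update_stats snapshots the current counters into the stats associative array declared near the top of chaining.sh. A rough reconstruction of that pattern (function bodies are inferred from the trace, not copied verbatim from chaining.sh; rpc_cmd is the autotest RPC wrapper shown in the log):

  declare -A stats

  get_stat() {                         # usage: get_stat <event> [opcode]
      local event=$1 opcode=$2
      if [[ -z $opcode ]]; then
          rpc_cmd accel_get_stats | jq -r ".$event"
      else
          rpc_cmd accel_get_stats | jq -r ".operations[] | select(.opcode == \"$opcode\").executed"
      fi
  }

  update_stats() {
      stats["sequence_executed"]=$(get_stat sequence_executed)
      stats["encrypt_executed"]=$(get_stat executed encrypt)
      stats["decrypt_executed"]=$(get_stat executed decrypt)
      stats["copy_executed"]=$(get_stat executed copy)
  }

  # After one 64 KiB write through the crypto chain, the test expects exactly
  # one more completed sequence and two more encrypt operations than baseline:
  (( $(get_stat sequence_executed) == stats[sequence_executed] + 1 ))
  (( $(get_stat executed encrypt) == stats[encrypt_executed] + 2 ))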
10:36:19 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@96 -- # update_stats 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@51 -- # get_stat 
sequence_executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:55.059 10:36:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:55.059 10:36:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:55.317 10:36:19 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.317 10:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:55.317 10:36:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:55.317 10:36:19 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:55.318 10:36:19 
chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.318 10:36:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:55.318 10:36:19 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.318 10:36:19 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:28:55.318 10:36:19 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.Tk9mKSEViU --ib Nvme0n1 --bs 65536 --count 1 00:28:55.318 10:36:19 chaining -- bdev/chaining.sh@25 -- # local config 00:28:55.318 10:36:19 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:28:55.318 10:36:19 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:28:55.318 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:28:55.318 10:36:19 chaining -- bdev/chaining.sh@31 -- # config='{ 00:28:55.318 "subsystems": [ 00:28:55.318 { 00:28:55.318 "subsystem": "bdev", 00:28:55.318 "config": [ 00:28:55.318 { 00:28:55.318 "method": "bdev_nvme_attach_controller", 00:28:55.318 "params": { 00:28:55.318 "trtype": "tcp", 00:28:55.318 "adrfam": "IPv4", 00:28:55.318 "name": "Nvme0", 00:28:55.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.318 "traddr": "10.0.0.2", 00:28:55.318 "trsvcid": "4420" 00:28:55.318 } 00:28:55.318 }, 00:28:55.318 { 00:28:55.318 "method": "bdev_set_options", 00:28:55.318 "params": { 00:28:55.318 "bdev_auto_examine": false 00:28:55.318 } 00:28:55.318 } 00:28:55.318 ] 00:28:55.318 } 00:28:55.318 ] 00:28:55.318 }' 00:28:55.318 10:36:19 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:28:55.318 "subsystems": [ 00:28:55.318 { 00:28:55.318 "subsystem": "bdev", 00:28:55.318 "config": [ 00:28:55.318 { 00:28:55.318 "method": "bdev_nvme_attach_controller", 00:28:55.318 "params": { 00:28:55.318 "trtype": "tcp", 00:28:55.318 "adrfam": "IPv4", 00:28:55.318 "name": "Nvme0", 00:28:55.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.318 "traddr": "10.0.0.2", 00:28:55.318 "trsvcid": "4420" 00:28:55.318 } 00:28:55.318 }, 00:28:55.318 { 00:28:55.318 "method": "bdev_set_options", 00:28:55.318 "params": { 00:28:55.318 "bdev_auto_examine": false 00:28:55.318 } 00:28:55.318 } 00:28:55.318 ] 00:28:55.318 } 00:28:55.318 ] 00:28:55.318 }' 00:28:55.318 10:36:19 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.Tk9mKSEViU --ib Nvme0n1 --bs 65536 --count 1 00:28:55.318 [2024-07-15 10:36:20.040602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:55.318 [2024-07-15 10:36:20.040652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966445 ] 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:55.318 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:55.318 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:55.318 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:55.576 [2024-07-15 10:36:20.133567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.576 [2024-07-15 10:36:20.203313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.092  Copying: 64/64 [kB] (average 62 MBps) 00:28:56.092 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:28:56.092 10:36:20 chaining -- 
bdev/chaining.sh@102 -- # get_stat executed decrypt 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:56.092 10:36:20 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:28:56.092 10:36:20 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.bgDHyGELUO /tmp/tmp.Tk9mKSEViU 00:28:56.350 10:36:20 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:28:56.350 10:36:20 chaining -- bdev/chaining.sh@25 -- # local config 00:28:56.350 10:36:20 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:28:56.350 10:36:20 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:28:56.350 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:28:56.350 10:36:20 chaining -- bdev/chaining.sh@31 -- # config='{ 00:28:56.350 "subsystems": [ 00:28:56.350 { 00:28:56.350 "subsystem": "bdev", 00:28:56.350 "config": [ 00:28:56.350 { 00:28:56.350 "method": "bdev_nvme_attach_controller", 00:28:56.350 "params": { 00:28:56.350 "trtype": "tcp", 00:28:56.350 "adrfam": "IPv4", 00:28:56.350 "name": "Nvme0", 00:28:56.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.350 "traddr": "10.0.0.2", 00:28:56.350 "trsvcid": "4420" 00:28:56.350 } 00:28:56.350 }, 00:28:56.350 { 00:28:56.350 "method": "bdev_set_options", 00:28:56.350 "params": { 00:28:56.350 "bdev_auto_examine": false 00:28:56.350 } 00:28:56.350 } 00:28:56.350 ] 00:28:56.350 } 00:28:56.350 ] 00:28:56.350 }' 00:28:56.350 10:36:20 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero 
--ob Nvme0n1 --bs 65536 --count 1 00:28:56.350 10:36:20 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:28:56.350 "subsystems": [ 00:28:56.350 { 00:28:56.350 "subsystem": "bdev", 00:28:56.350 "config": [ 00:28:56.350 { 00:28:56.350 "method": "bdev_nvme_attach_controller", 00:28:56.350 "params": { 00:28:56.350 "trtype": "tcp", 00:28:56.350 "adrfam": "IPv4", 00:28:56.350 "name": "Nvme0", 00:28:56.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.350 "traddr": "10.0.0.2", 00:28:56.350 "trsvcid": "4420" 00:28:56.350 } 00:28:56.350 }, 00:28:56.350 { 00:28:56.350 "method": "bdev_set_options", 00:28:56.350 "params": { 00:28:56.350 "bdev_auto_examine": false 00:28:56.350 } 00:28:56.350 } 00:28:56.350 ] 00:28:56.350 } 00:28:56.350 ] 00:28:56.350 }' 00:28:56.350 [2024-07-15 10:36:20.980375] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:56.350 [2024-07-15 10:36:20.980421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966645 ] 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:01.0 
cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.350 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:56.350 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.351 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:56.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.351 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:56.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.351 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:56.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.351 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:56.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.351 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:56.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:56.351 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:56.351 [2024-07-15 10:36:21.071507] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.608 [2024-07-15 10:36:21.142267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.866  Copying: 64/64 [kB] (average 10 MBps) 00:28:56.866 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@106 -- # update_stats 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:56.866 10:36:21 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.866 10:36:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:56.866 10:36:21 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:28:56.866 10:36:21 chaining -- 
bdev/chaining.sh@37 -- # local event opcode rpc 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:56.866 10:36:21 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:56.867 10:36:21 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:56.867 10:36:21 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.125 10:36:21 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:28:57.125 10:36:21 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.bgDHyGELUO --ob Nvme0n1 --bs 4096 --count 16 00:28:57.125 10:36:21 chaining -- bdev/chaining.sh@25 -- # local config 00:28:57.125 10:36:21 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:28:57.125 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:28:57.125 10:36:21 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:28:57.125 10:36:21 chaining -- bdev/chaining.sh@31 -- # config='{ 00:28:57.125 
"subsystems": [ 00:28:57.125 { 00:28:57.125 "subsystem": "bdev", 00:28:57.125 "config": [ 00:28:57.125 { 00:28:57.125 "method": "bdev_nvme_attach_controller", 00:28:57.125 "params": { 00:28:57.125 "trtype": "tcp", 00:28:57.125 "adrfam": "IPv4", 00:28:57.125 "name": "Nvme0", 00:28:57.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:57.125 "traddr": "10.0.0.2", 00:28:57.125 "trsvcid": "4420" 00:28:57.125 } 00:28:57.125 }, 00:28:57.125 { 00:28:57.125 "method": "bdev_set_options", 00:28:57.125 "params": { 00:28:57.125 "bdev_auto_examine": false 00:28:57.125 } 00:28:57.125 } 00:28:57.125 ] 00:28:57.125 } 00:28:57.125 ] 00:28:57.125 }' 00:28:57.125 10:36:21 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:28:57.125 "subsystems": [ 00:28:57.125 { 00:28:57.125 "subsystem": "bdev", 00:28:57.125 "config": [ 00:28:57.125 { 00:28:57.125 "method": "bdev_nvme_attach_controller", 00:28:57.125 "params": { 00:28:57.125 "trtype": "tcp", 00:28:57.125 "adrfam": "IPv4", 00:28:57.125 "name": "Nvme0", 00:28:57.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:57.125 "traddr": "10.0.0.2", 00:28:57.125 "trsvcid": "4420" 00:28:57.125 } 00:28:57.125 }, 00:28:57.125 { 00:28:57.125 "method": "bdev_set_options", 00:28:57.125 "params": { 00:28:57.125 "bdev_auto_examine": false 00:28:57.125 } 00:28:57.125 } 00:28:57.125 ] 00:28:57.125 } 00:28:57.125 ] 00:28:57.125 }' 00:28:57.125 10:36:21 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.bgDHyGELUO --ob Nvme0n1 --bs 4096 --count 16 00:28:57.125 [2024-07-15 10:36:21.777939] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:57.125 [2024-07-15 10:36:21.777989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966809 ] 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:57.125 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.125 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:57.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.126 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:57.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.126 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:57.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.126 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:57.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:57.126 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:57.126 [2024-07-15 10:36:21.868564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.383 [2024-07-15 10:36:21.939432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.899  Copying: 64/64 [kB] (average 12 MBps) 00:28:57.899 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:57.899 
10:36:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@114 -- # update_stats 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:57.899 10:36:22 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:57.899 10:36:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@40 -- # [[ 
-z copy ]] 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:58.157 10:36:22 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@117 -- # : 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.Tk9mKSEViU --ib Nvme0n1 --bs 4096 --count 16 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@25 -- # local config 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:28:58.157 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:28:58.157 10:36:22 chaining -- bdev/chaining.sh@31 -- # config='{ 00:28:58.157 "subsystems": [ 00:28:58.157 { 00:28:58.157 "subsystem": "bdev", 00:28:58.157 "config": [ 00:28:58.157 { 00:28:58.157 "method": "bdev_nvme_attach_controller", 00:28:58.157 "params": { 00:28:58.157 "trtype": "tcp", 00:28:58.157 "adrfam": "IPv4", 00:28:58.158 "name": "Nvme0", 00:28:58.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.158 "traddr": "10.0.0.2", 00:28:58.158 "trsvcid": "4420" 00:28:58.158 } 00:28:58.158 }, 00:28:58.158 { 00:28:58.158 "method": "bdev_set_options", 00:28:58.158 "params": { 00:28:58.158 "bdev_auto_examine": false 00:28:58.158 } 00:28:58.158 } 00:28:58.158 ] 00:28:58.158 } 00:28:58.158 ] 00:28:58.158 }' 00:28:58.158 10:36:22 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.Tk9mKSEViU --ib Nvme0n1 --bs 4096 --count 16 00:28:58.158 10:36:22 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:28:58.158 "subsystems": [ 00:28:58.158 { 00:28:58.158 "subsystem": "bdev", 00:28:58.158 "config": [ 00:28:58.158 { 00:28:58.158 "method": "bdev_nvme_attach_controller", 00:28:58.158 "params": { 00:28:58.158 "trtype": "tcp", 00:28:58.158 "adrfam": "IPv4", 00:28:58.158 "name": "Nvme0", 00:28:58.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:58.158 "traddr": "10.0.0.2", 00:28:58.158 "trsvcid": "4420" 00:28:58.158 } 00:28:58.158 }, 00:28:58.158 { 00:28:58.158 "method": "bdev_set_options", 00:28:58.158 "params": { 00:28:58.158 "bdev_auto_examine": false 00:28:58.158 } 00:28:58.158 } 00:28:58.158 ] 00:28:58.158 } 00:28:58.158 ] 00:28:58.158 }' 00:28:58.158 [2024-07-15 10:36:22.916758] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:58.158 [2024-07-15 10:36:22.916804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966973 ] 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:58.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:58.415 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.415 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:58.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.416 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:58.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.416 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:58.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.416 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:58.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.416 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:58.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.416 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:58.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.416 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:58.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.416 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:58.416 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:58.416 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:58.416 [2024-07-15 10:36:23.006331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.416 [2024-07-15 10:36:23.075220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.930  Copying: 64/64 [kB] (average 703 kBps) 00:28:58.930 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # opcode= 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:28:58.930 10:36:23 chaining -- 
bdev/chaining.sh@121 -- # get_stat executed decrypt 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # event=executed 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@10 -- # set +x 00:28:58.930 10:36:23 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.bgDHyGELUO /tmp/tmp.Tk9mKSEViU 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.bgDHyGELUO /tmp/tmp.Tk9mKSEViU 00:28:58.930 10:36:23 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:28:58.930 10:36:23 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:58.930 10:36:23 chaining -- nvmf/common.sh@117 -- # sync 00:28:58.930 10:36:23 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:58.930 10:36:23 chaining -- nvmf/common.sh@120 -- # set +e 00:28:58.930 10:36:23 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:58.930 10:36:23 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:58.930 rmmod nvme_tcp 00:28:58.930 rmmod nvme_fabrics 00:28:59.189 rmmod nvme_keyring 00:28:59.189 10:36:23 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.189 10:36:23 chaining -- nvmf/common.sh@124 -- # set -e 00:28:59.189 10:36:23 chaining -- nvmf/common.sh@125 -- # return 0 00:28:59.189 10:36:23 chaining -- nvmf/common.sh@489 -- # '[' -n 1966005 ']' 00:28:59.189 10:36:23 chaining -- nvmf/common.sh@490 -- # killprocess 1966005 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@948 -- # '[' -z 1966005 ']' 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@952 -- # kill -0 1966005 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@953 -- # uname 
00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1966005 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1966005' 00:28:59.189 killing process with pid 1966005 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@967 -- # kill 1966005 00:28:59.189 10:36:23 chaining -- common/autotest_common.sh@972 -- # wait 1966005 00:28:59.447 10:36:23 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.447 10:36:23 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:59.447 10:36:23 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.447 10:36:23 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.447 10:36:23 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.447 10:36:23 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.447 10:36:23 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:59.448 10:36:23 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.352 10:36:26 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:01.352 10:36:26 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:29:01.352 10:36:26 chaining -- bdev/chaining.sh@132 -- # bperfpid=1967559 00:29:01.352 10:36:26 chaining -- bdev/chaining.sh@134 -- # waitforlisten 1967559 00:29:01.352 10:36:26 chaining -- common/autotest_common.sh@829 -- # '[' -z 1967559 ']' 00:29:01.352 10:36:26 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.352 10:36:26 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:01.352 10:36:26 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:01.352 10:36:26 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.352 10:36:26 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:01.352 10:36:26 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:01.352 [2024-07-15 10:36:26.100577] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:01.352 [2024-07-15 10:36:26.100623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967559 ] 00:29:01.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.610 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:01.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.610 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:01.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.610 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:01.611 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:01.611 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:01.611 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:01.611 [2024-07-15 10:36:26.191178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.611 [2024-07-15 10:36:26.264301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.177 10:36:26 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:02.177 10:36:26 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:02.177 10:36:26 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:29:02.177 10:36:26 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.177 10:36:26 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:02.434 malloc0 00:29:02.434 true 00:29:02.434 true 00:29:02.434 [2024-07-15 10:36:27.008353] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:02.434 crypto0 00:29:02.434 [2024-07-15 10:36:27.016376] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:29:02.435 crypto1 00:29:02.435 10:36:27 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.435 10:36:27 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:02.435 Running I/O for 5 seconds... 
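The (( observed == stats[...] + delta )) checks traced above all follow one pattern: query accel_get_stats over the RPC socket, pull a single counter out with jq, and compare against the value cached before the I/O. A minimal standalone version of that helper is sketched here, reusing the exact jq filters visible in the trace; the rpc.py and socket paths are assumptions, and this is a sketch rather than the literal chaining.sh code.

  #!/usr/bin/env bash
  # Standalone version of the get_stat pattern seen in the trace above (paths are assumptions).
  sock=/var/tmp/spdk.sock
  rpc() { ./scripts/rpc.py -s "$sock" "$@"; }

  get_stat() {
      local opcode=$1
      if [[ -z $opcode ]]; then
          # top-level counter, e.g. sequence_executed
          rpc accel_get_stats | jq -r '.sequence_executed'
      else
          # per-opcode counter, e.g. executed encrypt/decrypt/copy operations
          rpc accel_get_stats | jq -r --arg op "$opcode" \
              '.operations[] | select(.opcode == $op).executed'
      fi
  }

  before=$(get_stat encrypt)
  # ... run a 16 x 4096-byte spdk_dd write through the crypto0/crypto1 chain here ...
  after=$(get_stat encrypt)
  # two stacked crypto bdevs encrypt each 4k block twice, so 16 blocks should add 32
  (( after == before + 32 )) || echo "unexpected encrypt delta: $((after - before))"

This matches the deltas asserted above: +16 sequences and +32 encrypts for the 16-block write, then +16 sequences and +32 decrypts (with encrypts unchanged) for the read-back that follows it.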
00:29:07.698 00:29:07.698 Latency(us) 00:29:07.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.698 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:07.698 Verification LBA range: start 0x0 length 0x2000 00:29:07.698 crypto1 : 5.01 18605.51 72.68 0.00 0.00 13729.06 4168.09 9489.61 00:29:07.698 =================================================================================================================== 00:29:07.698 Total : 18605.51 72.68 0.00 0.00 13729.06 4168.09 9489.61 00:29:07.698 0 00:29:07.698 10:36:32 chaining -- bdev/chaining.sh@146 -- # killprocess 1967559 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@948 -- # '[' -z 1967559 ']' 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@952 -- # kill -0 1967559 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@953 -- # uname 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1967559 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1967559' 00:29:07.698 killing process with pid 1967559 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@967 -- # kill 1967559 00:29:07.698 Received shutdown signal, test time was about 5.000000 seconds 00:29:07.698 00:29:07.698 Latency(us) 00:29:07.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.698 =================================================================================================================== 00:29:07.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@972 -- # wait 1967559 00:29:07.698 10:36:32 chaining -- bdev/chaining.sh@152 -- # bperfpid=1968614 00:29:07.698 10:36:32 chaining -- bdev/chaining.sh@154 -- # waitforlisten 1968614 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@829 -- # '[' -z 1968614 ']' 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.698 10:36:32 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:07.698 10:36:32 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:07.698 [2024-07-15 10:36:32.414095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:07.698 [2024-07-15 10:36:32.414145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968614 ] 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:07.698 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.698 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:07.698 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.699 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.699 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.699 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.699 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:07.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.699 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:07.956 [2024-07-15 10:36:32.503360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.956 [2024-07-15 10:36:32.576630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.522 10:36:33 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:08.522 10:36:33 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:08.522 10:36:33 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:29:08.522 10:36:33 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.522 10:36:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:08.522 malloc0 00:29:08.779 true 00:29:08.779 true 00:29:08.779 [2024-07-15 10:36:33.325815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:29:08.779 [2024-07-15 10:36:33.325854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:08.779 [2024-07-15 10:36:33.325872] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x149d3b0 00:29:08.779 [2024-07-15 10:36:33.325880] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:08.779 [2024-07-15 10:36:33.326612] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:08.779 [2024-07-15 10:36:33.326630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:29:08.779 pt0 00:29:08.779 [2024-07-15 10:36:33.333842] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:08.779 crypto0 00:29:08.779 [2024-07-15 10:36:33.341862] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:29:08.779 crypto1 00:29:08.779 10:36:33 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.779 10:36:33 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:08.779 Running I/O for 5 seconds... 
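For this second bdevperf run the stack gains a passthru layer: the trace shows malloc0 created, the two named keys registered (the two "true" results), malloc0 claimed by pt0, and then crypto0 ("key0") and crypto1 ("key1") stacked on top before perform_tests starts the verify workload. A rough, illustrative way to assemble the same stack by hand over the bdevperf RPC socket is sketched below; the rpc.py flag spellings (-b/-p for bdev_passthru_create, -n for key names), the sizes, the assumption that crypto0 sits on pt0, and the key material are from memory rather than copied out of chaining.sh, so treat this as a sketch only.

  # Hypothetical recreation of the malloc0 -> pt0 -> crypto0 -> crypto1 stack registered above.
  sock=/var/tmp/spdk.sock
  rpc() { ./scripts/rpc.py -s "$sock" "$@"; }

  # bdevperf was launched with --wait-for-rpc -z, so finish framework init before creating bdevs
  rpc framework_start_init

  rpc bdev_malloc_create -b malloc0 32 4096                              # base bdev (size/block size are placeholders)
  rpc accel_crypto_key_create -c AES_XTS -k "$KEY0" -e "$TWEAK0" -n key0 # placeholder key material
  rpc accel_crypto_key_create -c AES_XTS -k "$KEY1" -e "$TWEAK1" -n key1
  rpc bdev_passthru_create -b malloc0 -p pt0                             # the pt_bdev claimed in the trace
  rpc bdev_crypto_create pt0 crypto0 -n key0                             # "Found key key0" -> crypto0
  rpc bdev_crypto_create crypto0 crypto1 -n key1                         # "Found key key1" -> crypto1

  # then start the queued verify workload, as the trace does via bdevperf.py perform_tests
  ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests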
00:29:14.117 00:29:14.117 Latency(us) 00:29:14.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.117 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:14.117 Verification LBA range: start 0x0 length 0x2000 00:29:14.117 crypto1 : 5.01 14709.86 57.46 0.00 0.00 17367.26 4115.66 11219.76 00:29:14.117 =================================================================================================================== 00:29:14.117 Total : 14709.86 57.46 0.00 0.00 17367.26 4115.66 11219.76 00:29:14.117 0 00:29:14.117 10:36:38 chaining -- bdev/chaining.sh@167 -- # killprocess 1968614 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@948 -- # '[' -z 1968614 ']' 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@952 -- # kill -0 1968614 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@953 -- # uname 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1968614 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1968614' 00:29:14.117 killing process with pid 1968614 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@967 -- # kill 1968614 00:29:14.117 Received shutdown signal, test time was about 5.000000 seconds 00:29:14.117 00:29:14.117 Latency(us) 00:29:14.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.117 =================================================================================================================== 00:29:14.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@972 -- # wait 1968614 00:29:14.117 10:36:38 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:29:14.117 10:36:38 chaining -- bdev/chaining.sh@170 -- # killprocess 1968614 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@948 -- # '[' -z 1968614 ']' 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@952 -- # kill -0 1968614 00:29:14.117 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1968614) - No such process 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@975 -- # echo 'Process with pid 1968614 is not found' 00:29:14.117 Process with pid 1968614 is not found 00:29:14.117 10:36:38 chaining -- bdev/chaining.sh@171 -- # wait 1968614 00:29:14.117 10:36:38 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:29:14.117 10:36:38 
chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.117 10:36:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@296 -- # e810=() 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@297 -- # x722=() 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@298 -- # mlx=() 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:29:14.117 Found 0000:20:00.0 (0x8086 - 0x159b) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
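Editor's note: the trace above is the NIC-discovery step: the harness builds lists of supported Intel (e810/x722) and Mellanox device IDs and matches them against the PCI bus, here finding E810 ports (0x8086 - 0x159b). A rough sketch of the same idea queried straight from sysfs (the /sys/bus/pci/devices/$pci/net path follows the pattern used above; hard-coding a single device ID is purely for brevity):
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue   # E810 (0x159b) only, for brevity
        echo "Found ${pci##*/} ($vendor - $device)"
        [[ -d $pci/net ]] || continue                              # port not bound to a kernel net driver
        for net in "$pci"/net/*; do echo "  net device: ${net##*/}"; done
    done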
00:29:14.117 10:36:38 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:29:14.117 Found 0000:20:00.1 (0x8086 - 0x159b) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:29:14.117 Found net devices under 0000:20:00.0: cvl_0_0 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:29:14.117 Found net devices under 0000:20:00.1: cvl_0_1 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:14.117 10:36:38 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:14.118 10:36:38 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.376 10:36:38 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.376 10:36:38 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:14.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:29:14.376 00:29:14.376 --- 10.0.0.2 ping statistics --- 00:29:14.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.376 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:29:14.376 00:29:14.376 --- 10.0.0.1 ping statistics --- 00:29:14.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.376 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@422 -- # return 0 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:14.376 10:36:39 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:14.376 10:36:39 chaining -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:14.376 10:36:39 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@481 -- # nvmfpid=1969706 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@482 -- # waitforlisten 1969706 00:29:14.376 10:36:39 chaining -- common/autotest_common.sh@829 -- # '[' -z 1969706 ']' 00:29:14.376 10:36:39 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.376 10:36:39 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.376 10:36:39 chaining -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.376 10:36:39 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.376 10:36:39 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:14.376 10:36:39 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:14.376 [2024-07-15 10:36:39.118534] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:14.376 [2024-07-15 10:36:39.118587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:29:14.634 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:14.634 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:14.634 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:14.634 [2024-07-15 10:36:39.215416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.634 [2024-07-15 10:36:39.286323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.634 [2024-07-15 10:36:39.286363] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.634 [2024-07-15 10:36:39.286373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.634 [2024-07-15 10:36:39.286381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.634 [2024-07-15 10:36:39.286404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
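Editor's note: at this point the NVMe-oF target is launched inside the cvl_0_0_ns_spdk namespace and the harness waits for its RPC socket before continuing. A condensed sketch of that launch-and-wait pattern, using the binary and flags from the trace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its exact implementation:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the RPC socket until the target answers (assumed simplification)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done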
00:29:14.634 [2024-07-15 10:36:39.286424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.199 10:36:39 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.199 10:36:39 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:15.199 10:36:39 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:15.199 10:36:39 chaining -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.199 10:36:39 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:15.199 10:36:39 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.199 10:36:39 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:29:15.199 10:36:39 chaining -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.199 10:36:39 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:15.199 malloc0 00:29:15.199 [2024-07-15 10:36:39.960361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.199 [2024-07-15 10:36:39.976521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.199 10:36:39 chaining -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.458 10:36:39 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:29:15.458 10:36:39 chaining -- bdev/chaining.sh@189 -- # bperfpid=1969984 00:29:15.458 10:36:39 chaining -- bdev/chaining.sh@191 -- # waitforlisten 1969984 /var/tmp/bperf.sock 00:29:15.458 10:36:39 chaining -- common/autotest_common.sh@829 -- # '[' -z 1969984 ']' 00:29:15.458 10:36:39 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.458 10:36:39 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:15.458 10:36:39 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:29:15.458 10:36:39 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:15.458 10:36:39 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:15.458 10:36:39 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:15.458 [2024-07-15 10:36:40.034183] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
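Editor's note: the target-side rpc_cmd step above creates malloc0, initializes the TCP transport and starts listening on 10.0.0.2 port 4420. A hedged sketch of what that typically looks like with rpc.py; only the address, port and transport come from the log, while the subsystem NQN and exact transport options are assumptions:
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a        # NQN is illustrative
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0   # expose malloc0 as a namespace
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420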
00:29:15.458 [2024-07-15 10:36:40.034238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969984 ] 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:15.458 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:15.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:15.458 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:15.458 [2024-07-15 10:36:40.129814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.458 [2024-07-15 10:36:40.204629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.392 10:36:40 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:16.392 10:36:40 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:16.392 10:36:40 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:29:16.392 10:36:40 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:29:16.392 [2024-07-15 10:36:41.167975] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:16.392 nvme0n1 00:29:16.392 true 00:29:16.392 crypto0 00:29:16.651 10:36:41 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:16.651 Running I/O for 5 seconds... 
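Editor's note: the initiator side follows a launch-and-drive pattern: bdevperf is started idle (-z with --wait-for-rpc), the NVMe/TCP namespace is attached and wrapped in crypto0 over its private RPC socket, and bdevperf.py then kicks off the verify workload. Using the binaries and flags shown in the trace (paths relative to the SPDK tree, intermediate bdev RPCs elided):
    ./build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z &
    # ... attach the NVMe-oF controller and create crypto0 via rpc.py -s /var/tmp/bperf.sock ...
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests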
00:29:21.935 00:29:21.935 Latency(us) 00:29:21.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.935 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:29:21.935 Verification LBA range: start 0x0 length 0x2000 00:29:21.935 crypto0 : 5.01 13375.91 52.25 0.00 0.00 19090.36 2503.48 16462.64 00:29:21.935 =================================================================================================================== 00:29:21.935 Total : 13375.91 52.25 0.00 0.00 19090.36 2503.48 16462.64 00:29:21.935 0 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@205 -- # sequence=134150 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@206 -- # encrypt=67075 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:21.935 10:36:46 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@207 -- # decrypt=67075 
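Editor's note: each of the stat checks above reduces to one accel_get_stats RPC against the bdevperf socket plus a jq filter. For reference, the two query shapes used by get_stat_bperf, taken directly from the trace:
    # total number of completed accel sequences
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -r .sequence_executed
    # per-opcode completion count, e.g. encrypt / decrypt / crc32c
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -r '.operations[] | select(.opcode == "encrypt").executed'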
00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:29:22.193 10:36:46 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:22.451 10:36:47 chaining -- bdev/chaining.sh@208 -- # crc32c=134150 00:29:22.451 10:36:47 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:29:22.451 10:36:47 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:29:22.451 10:36:47 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:29:22.451 10:36:47 chaining -- bdev/chaining.sh@214 -- # killprocess 1969984 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@948 -- # '[' -z 1969984 ']' 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@952 -- # kill -0 1969984 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@953 -- # uname 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1969984 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1969984' 00:29:22.451 killing process with pid 1969984 00:29:22.451 10:36:47 chaining -- common/autotest_common.sh@967 -- # kill 1969984 00:29:22.451 Received shutdown signal, test time was about 5.000000 seconds 00:29:22.451 00:29:22.451 Latency(us) 00:29:22.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.452 =================================================================================================================== 00:29:22.452 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.452 10:36:47 chaining -- common/autotest_common.sh@972 -- # wait 1969984 00:29:22.710 10:36:47 chaining -- bdev/chaining.sh@219 -- # bperfpid=1971086 00:29:22.710 10:36:47 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:29:22.710 10:36:47 chaining -- bdev/chaining.sh@221 -- # waitforlisten 1971086 /var/tmp/bperf.sock 00:29:22.710 10:36:47 chaining -- common/autotest_common.sh@829 -- # '[' -z 1971086 ']' 00:29:22.710 10:36:47 chaining -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:22.710 10:36:47 chaining -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:22.710 10:36:47 chaining -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:22.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:22.710 10:36:47 chaining -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:22.710 10:36:47 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:22.710 [2024-07-15 10:36:47.339656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:22.710 [2024-07-15 10:36:47.339706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971086 ] 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:22.710 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:22.710 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:22.710 [2024-07-15 10:36:47.432130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.979 [2024-07-15 10:36:47.507531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.548 10:36:48 chaining -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:23.548 10:36:48 chaining -- common/autotest_common.sh@862 -- # return 0 00:29:23.548 10:36:48 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:29:23.548 10:36:48 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:29:23.805 [2024-07-15 10:36:48.450699] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:29:23.805 nvme0n1 00:29:23.805 true 00:29:23.805 crypto0 00:29:23.805 10:36:48 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:23.805 Running I/O for 5 seconds... 
00:29:29.071 00:29:29.071 Latency(us) 00:29:29.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.071 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:29:29.071 Verification LBA range: start 0x0 length 0x200 00:29:29.071 crypto0 : 5.01 2598.52 162.41 0.00 0.00 12071.70 956.83 17825.79 00:29:29.071 =================================================================================================================== 00:29:29.071 Total : 2598.52 162.41 0.00 0.00 12071.70 956.83 17825.79 00:29:29.071 0 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@39 -- # opcode= 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@233 -- # sequence=26020 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:29:29.071 10:36:53 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:29.328 10:36:53 chaining -- bdev/chaining.sh@234 -- # encrypt=13010 00:29:29.328 10:36:53 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:29:29.328 10:36:53 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:29:29.328 10:36:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:29.329 10:36:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:29.329 10:36:53 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:29:29.329 10:36:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:29.329 10:36:53 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:29:29.329 10:36:53 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:29.329 10:36:53 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:29.329 10:36:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@235 -- # decrypt=13010 
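Editor's note: the consistency checks that follow (once the crc32c counter has been read the same way) are plain shell arithmetic over these counters, and with this run's numbers they all hold: 13010 encrypts + 13010 decrypts = 26020 = sequence_executed = crc32c. In sketch form:
    (( sequence > 0 ))                    # 26020 > 0
    (( encrypt + decrypt == sequence ))   # 13010 + 13010 == 26020
    (( encrypt + decrypt == crc32c ))     # 13010 + 13010 == 26020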
00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@236 -- # crc32c=26020 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:29:29.587 10:36:54 chaining -- bdev/chaining.sh@242 -- # killprocess 1971086 00:29:29.587 10:36:54 chaining -- common/autotest_common.sh@948 -- # '[' -z 1971086 ']' 00:29:29.587 10:36:54 chaining -- common/autotest_common.sh@952 -- # kill -0 1971086 00:29:29.587 10:36:54 chaining -- common/autotest_common.sh@953 -- # uname 00:29:29.587 10:36:54 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:29.587 10:36:54 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1971086 00:29:29.845 10:36:54 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:29.845 10:36:54 chaining -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:29.845 10:36:54 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1971086' 00:29:29.845 killing process with pid 1971086 00:29:29.845 10:36:54 chaining -- common/autotest_common.sh@967 -- # kill 1971086 00:29:29.845 Received shutdown signal, test time was about 5.000000 seconds 00:29:29.845 00:29:29.845 Latency(us) 00:29:29.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.845 =================================================================================================================== 00:29:29.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.845 10:36:54 chaining -- common/autotest_common.sh@972 -- # wait 1971086 00:29:29.845 10:36:54 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:29:29.845 10:36:54 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:29.845 10:36:54 chaining -- nvmf/common.sh@117 -- # sync 00:29:29.845 10:36:54 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:29.845 10:36:54 chaining -- nvmf/common.sh@120 -- # set +e 00:29:29.845 10:36:54 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:29.845 10:36:54 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:29.845 rmmod nvme_tcp 00:29:29.845 rmmod nvme_fabrics 00:29:29.845 rmmod nvme_keyring 00:29:29.846 10:36:54 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@124 -- # set -e 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@125 -- # return 0 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@489 -- # 
'[' -n 1969706 ']' 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@490 -- # killprocess 1969706 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@948 -- # '[' -z 1969706 ']' 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@952 -- # kill -0 1969706 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@953 -- # uname 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1969706 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1969706' 00:29:30.104 killing process with pid 1969706 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@967 -- # kill 1969706 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@972 -- # wait 1969706 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.104 10:36:54 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:30.104 10:36:54 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.637 10:36:56 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:32.637 10:36:56 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:29:32.637 00:29:32.637 real 0m48.066s 00:29:32.637 user 0m55.778s 00:29:32.637 sys 0m12.665s 00:29:32.637 10:36:56 chaining -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.637 10:36:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:29:32.637 ************************************ 00:29:32.637 END TEST chaining 00:29:32.637 ************************************ 00:29:32.637 10:36:56 -- common/autotest_common.sh@1142 -- # return 0 00:29:32.637 10:36:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:32.637 10:36:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:32.637 10:36:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:32.637 10:36:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:32.637 10:36:56 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:32.637 10:36:56 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:32.637 10:36:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:32.637 10:36:56 -- common/autotest_common.sh@10 -- # set +x 00:29:32.637 10:36:57 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:32.637 10:36:57 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:32.637 10:36:57 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:32.637 10:36:57 -- common/autotest_common.sh@10 -- # set +x 00:29:37.951 INFO: APP EXITING 00:29:37.951 INFO: killing all VMs 00:29:37.951 INFO: killing vhost app 00:29:37.951 WARN: no vhost pid file found 00:29:37.951 INFO: EXIT DONE 00:29:42.128 Waiting for block devices as requested 00:29:42.128 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:42.128 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 
00:29:42.128 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:42.128 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:42.128 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:42.128 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:42.128 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:42.386 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:42.386 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:42.386 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:42.644 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:42.644 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:42.644 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:42.901 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:42.901 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:42.901 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:43.159 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:29:48.426 Cleaning 00:29:48.426 Removing: /var/run/dpdk/spdk0/config 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:48.426 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:48.426 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:48.426 Removing: /dev/shm/nvmf_trace.0 00:29:48.426 Removing: /dev/shm/spdk_tgt_trace.pid1696492 00:29:48.426 Removing: /var/run/dpdk/spdk0 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1691511 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1695037 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1696492 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1696964 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1698020 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1698294 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1699160 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1699409 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1699775 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1702859 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1704821 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1705135 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1705458 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1705791 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1706113 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1706399 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1706685 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1706992 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1707718 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1710725 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1711013 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1711334 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1711638 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1711663 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1711921 00:29:48.426 Removing: /var/run/dpdk/spdk_pid1712181 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1712418 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1712653 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1712893 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1713146 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1713422 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1713710 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1713987 00:29:48.427 Removing: 
/var/run/dpdk/spdk_pid1714268 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1714556 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1714835 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1715124 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1715404 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1715687 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1715969 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1716227 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1716465 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1716717 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1716958 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1717188 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1717443 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1717878 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1718238 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1718528 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1718819 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1719203 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1719644 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1719934 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1720046 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1720515 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1721275 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1721813 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1722091 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1726069 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1728213 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1730392 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1731462 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1732645 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1733073 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1733208 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1733233 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1738087 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1738651 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1739975 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1740260 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1745760 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1747342 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1748242 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1752509 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1754065 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1755090 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1759765 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1762201 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1763099 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1772581 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1774675 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1775681 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1785200 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1787360 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1788378 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1798577 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1801709 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1802847 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1813565 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1815939 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1816943 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1828191 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1830647 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1831796 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1842531 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1846243 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1847313 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1848352 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1851654 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1856973 00:29:48.427 Removing: 
/var/run/dpdk/spdk_pid1859785 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1865134 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1868777 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1874271 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1877247 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1884026 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1886413 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1892690 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1895109 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1901905 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1904316 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1908785 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1909184 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1909588 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1910126 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1910679 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1911342 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1912274 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1912638 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1914767 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1916870 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1918833 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1920466 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1922584 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1924714 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1926702 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1928554 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1929607 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1930152 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1932257 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1934751 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1937022 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1938353 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1939692 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1940481 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1940518 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1940584 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1940874 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1941146 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1942273 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1944291 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1946167 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1947182 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1948215 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1948532 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1948556 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1948580 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1949704 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1950441 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1950881 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1953145 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1955412 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1957691 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1959014 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1960831 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1961685 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1961752 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1966314 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1966445 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1966645 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1966809 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1966973 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1967559 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1968614 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1969984 00:29:48.427 Removing: /var/run/dpdk/spdk_pid1971086 00:29:48.427 Clean 00:29:48.685 10:37:13 -- common/autotest_common.sh@1451 -- # 
return 0 00:29:48.685 10:37:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:29:48.685 10:37:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:48.685 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:29:48.685 10:37:13 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:29:48.685 10:37:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:48.685 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:29:48.685 10:37:13 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:29:48.685 10:37:13 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 00:29:48.685 10:37:13 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:29:48.685 10:37:13 -- spdk/autotest.sh@391 -- # hash lcov 00:29:48.685 10:37:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:48.685 10:37:13 -- spdk/autotest.sh@393 -- # hostname 00:29:48.685 10:37:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:29:48.943 geninfo: WARNING: invalid characters removed from testname! 00:30:10.877 10:37:32 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:10.877 10:37:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:11.812 10:37:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:13.715 10:37:38 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:15.160 10:37:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:17.059 10:37:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:30:18.435 10:37:42 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:18.435 10:37:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:30:18.435 10:37:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:18.435 10:37:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.435 10:37:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.435 10:37:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.435 10:37:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.435 10:37:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.435 10:37:43 -- paths/export.sh@5 -- $ export PATH 00:30:18.435 10:37:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.435 10:37:43 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:30:18.435 10:37:43 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:18.435 10:37:43 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721032663.XXXXXX 00:30:18.435 10:37:43 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721032663.SM1MWK 00:30:18.435 10:37:43 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:18.435 10:37:43 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:18.435 10:37:43 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:30:18.435 10:37:43 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:18.435 10:37:43 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:18.435 10:37:43 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:18.435 10:37:43 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:18.435 10:37:43 -- common/autotest_common.sh@10 -- $ set +x 00:30:18.435 10:37:43 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:30:18.435 10:37:43 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:18.435 10:37:43 -- pm/common@17 -- $ local monitor 00:30:18.435 10:37:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:18.435 10:37:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:18.435 10:37:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:18.435 10:37:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:18.435 10:37:43 -- pm/common@25 -- $ sleep 1 00:30:18.435 10:37:43 -- pm/common@21 -- $ date +%s 00:30:18.435 10:37:43 -- pm/common@21 -- $ date +%s 00:30:18.435 10:37:43 -- pm/common@21 -- $ date +%s 00:30:18.435 10:37:43 -- pm/common@21 -- $ date +%s 00:30:18.435 10:37:43 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032663 00:30:18.435 10:37:43 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032663 00:30:18.435 10:37:43 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032663 00:30:18.435 10:37:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032663 00:30:18.435 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032663_collect-vmstat.pm.log 00:30:18.435 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032663_collect-cpu-load.pm.log 00:30:18.435 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032663_collect-cpu-temp.pm.log 00:30:18.435 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032663_collect-bmc-pm.bmc.pm.log 00:30:19.373 10:37:44 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:19.373 10:37:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:30:19.373 10:37:44 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:19.373 10:37:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:19.373 10:37:44 -- 
spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:19.373 10:37:44 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:19.373 10:37:44 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:19.373 10:37:44 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:19.373 10:37:44 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:30:19.631 10:37:44 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:19.631 10:37:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:19.631 10:37:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:19.631 10:37:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:19.631 10:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:19.631 10:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:19.631 10:37:44 -- pm/common@44 -- $ pid=1983435 00:30:19.631 10:37:44 -- pm/common@50 -- $ kill -TERM 1983435 00:30:19.631 10:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:19.631 10:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:19.631 10:37:44 -- pm/common@44 -- $ pid=1983436 00:30:19.631 10:37:44 -- pm/common@50 -- $ kill -TERM 1983436 00:30:19.631 10:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:19.631 10:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:19.631 10:37:44 -- pm/common@44 -- $ pid=1983438 00:30:19.631 10:37:44 -- pm/common@50 -- $ kill -TERM 1983438 00:30:19.631 10:37:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:19.631 10:37:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:19.631 10:37:44 -- pm/common@44 -- $ pid=1983461 00:30:19.631 10:37:44 -- pm/common@50 -- $ sudo -E kill -TERM 1983461 00:30:19.631 + [[ -n 1567629 ]] 00:30:19.631 + sudo kill 1567629 00:30:19.640 [Pipeline] } 00:30:19.655 [Pipeline] // stage 00:30:19.660 [Pipeline] } 00:30:19.677 [Pipeline] // timeout 00:30:19.682 [Pipeline] } 00:30:19.699 [Pipeline] // catchError 00:30:19.705 [Pipeline] } 00:30:19.722 [Pipeline] // wrap 00:30:19.726 [Pipeline] } 00:30:19.738 [Pipeline] // catchError 00:30:19.747 [Pipeline] stage 00:30:19.749 [Pipeline] { (Epilogue) 00:30:19.760 [Pipeline] catchError 00:30:19.762 [Pipeline] { 00:30:19.776 [Pipeline] echo 00:30:19.778 Cleanup processes 00:30:19.784 [Pipeline] sh 00:30:20.065 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:20.065 1983537 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache 00:30:20.066 1983883 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:20.080 [Pipeline] sh 00:30:20.360 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:30:20.360 ++ grep -v 'sudo pgrep' 00:30:20.360 ++ awk '{print $1}' 00:30:20.360 + sudo kill -9 1983537 00:30:20.373 [Pipeline] sh 00:30:20.656 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:20.656 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:30:24.841 xz: Reduced the number of threads from 112 to 89 to not 
exceed the memory usage limit of 14,721 MiB 00:30:29.033 [Pipeline] sh 00:30:29.314 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:29.314 Artifacts sizes are good 00:30:29.327 [Pipeline] archiveArtifacts 00:30:29.334 Archiving artifacts 00:30:29.460 [Pipeline] sh 00:30:29.742 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest 00:30:29.762 [Pipeline] cleanWs 00:30:29.774 [WS-CLEANUP] Deleting project workspace... 00:30:29.774 [WS-CLEANUP] Deferred wipeout is used... 00:30:29.781 [WS-CLEANUP] done 00:30:29.783 [Pipeline] } 00:30:29.806 [Pipeline] // catchError 00:30:29.820 [Pipeline] sh 00:30:30.099 + logger -p user.info -t JENKINS-CI 00:30:30.108 [Pipeline] } 00:30:30.126 [Pipeline] // stage 00:30:30.131 [Pipeline] } 00:30:30.144 [Pipeline] // node 00:30:30.149 [Pipeline] End of Pipeline 00:30:30.175 Finished: SUCCESS